
15 February 2017, Volume 43 Issue 2
    

  • HU You,LI Renfa,WU Wufei
    Computer Engineering. 2017, 43(2): 1-5. https://doi.org/10.3969/j.issn.1000-3428.2017.02.001
    It is unavoidable that Vehicular Heterogeneous Networks (VHN) need to exchange data with each other through a gateway. The data encapsulation method between heterogeneous networks is important for improving network protocol forwarding efficiency and determines the performance and reliability of the gateway. To improve the efficiency of gateway data forwarding, this paper proposes a new CAN/FlexRay network gateway data encapsulation method. Signals passing through the gateway in CAN and FlexRay networks are encapsulated into frames so that each signal frame contains more effective signals, improving data utilization. At the same time, it refines the BCBFD_LFS algorithm to determine the optimum static time slot and decrease the message processing time. Experimental results show that the new method improves message schedulability, network bandwidth utilization, and the data forwarding capability of the gateway in VHN.
  • XIAO Jing,LIU Wei,TANG Lun
    Computer Engineering. 2017, 43(2): 6-15. https://doi.org/10.3969/j.issn.1000-3428.2017.02.002
    To guarantee message forwarding efficiency in Vehicular Ad Hoc Network (VANET), this paper presents a routing algorithm based on a task allocation model. Network communities are divided according to the social features of the nodes. Then, a consultation mechanism is used to allocate emergency safety and non-safety message tasks to communities, and each community assigns tasks to its nodes. Incentive mechanisms are established to improve the distribution efficiency of message tasks. Theoretical analysis and simulation results show that, compared with other routing algorithms (such as Simbet, Prophet, Spray and Wait), the proposed algorithm improves the message delivery ratio, reduces the routing overhead, and meets the Quality of Service (QoS) requirements of different types of message tasks.
  • ZHU Yan,LI Hongwei,FAN Chao,XU Donghao,SHI Fanglin
    Computer Engineering. 2017, 43(2): 16-20. https://doi.org/10.3969/j.issn.1000-3428.2017.02.003
    Taxi Global Position System (GPS) data contain macro information about urban traffic behavior and moving object behavior, from which valuable anomalous trajectory patterns can be mined. The location, geometry and travel time are taken as the spatial and temporal characteristics of a taxi trajectory. According to the deviation of these features, trajectory anomalies are divided into temporal, spatial and spatio-temporal outliers. Trajectories with the same starting and ending points are extracted from the trajectory data and partitioned into segments. The similarity between trajectories is calculated and clustering based on distance and density is carried out. Frequent and sparse trajectories are preliminarily separated by the spatial characteristics. Based on the kσ criterion, the separation threshold for temporal anomalies is determined to classify the temporal characteristic, and finally taxi trajectory outlier detection is realized. Experimental results show that the method can extract personalized routes as well as abnormal parking locations and traffic sections from abnormal trajectories, providing reference information for intelligent transportation as well as efficient logistics planning and execution.
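    As an illustration of the kσ-style temporal separation described above, the following minimal Python sketch flags trips whose travel time deviates from the mean of the same origin-destination group by more than k standard deviations; the sample data and the choice k = 3 are hypothetical, not taken from the paper.

    import numpy as np

    def separate_temporal_outliers(travel_times, k=3.0):
        """Flag trajectories whose travel time deviates more than k standard
        deviations from the mean of trajectories with the same OD pair."""
        t = np.asarray(travel_times, dtype=float)
        mu, sigma = t.mean(), t.std()
        threshold = k * sigma
        is_outlier = np.abs(t - mu) > threshold
        return is_outlier, mu, threshold

    times = [12.1, 11.8, 12.5, 13.0, 30.4, 12.2]   # minutes, hypothetical trips
    flags, mu, thr = separate_temporal_outliers(times)
    print(flags)   # the 30.4 min trip is marked as a temporal outlier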
  • ZHAO Yingying,TAN Xianhai
    Computer Engineering. 2017, 43(2): 21-25,32. https://doi.org/10.3969/j.issn.1000-3428.2017.02.004
    The Two-dimensional Feature Fusion (2DFF) method based on a two-dimensional feature matrix, i.e., two-dimensional principal component analysis, can achieve feature fusion by reducing the dimensions of the feature matrix, but it performs well only when the difference in the dimensions of the feature vectors is small. The traditional 2DFF construction method appends zeros after each feature vector to form a two-dimensional feature matrix, which may change the attributes of the original feature vectors when the difference in dimension between feature vectors is large, and thus decreases the identification rate. To address this disadvantage, a new feature matrix construction method based on Singular Value Decomposition (SVD) is proposed. The new method concatenates all feature vectors end to end into a single one-dimensional feature vector, which is then decomposed into a two-dimensional feature matrix while keeping the phase of the signal unchanged, based on the decomposition property of SVD. Experimental results show that the new method achieves a higher identification rate than the traditional 2DFF feature construction method when the difference in the dimensions of the feature vectors is large.
  • WU Jun,LI Wenjie,GENG Lei,XIAO Zhitao,ZHANG Fang,LI Yuelong
    Computer Engineering. 2017, 43(2): 26-32. https://doi.org/10.3969/j.issn.1000-3428.2017.02.005
    Aiming at the anti-collision problem in intelligent vehicle control, a vehicle detection and ranging method based on monocular vision is presented. Multiscale Block-Local Binary Pattern (MB-LBP) and Adaboost are used to extract vehicle candidate areas, and horizontal edge and gray features are used to eliminate false detections, which effectively solves the interference of roads and green belts. An improved shadow location method is used to obtain the exact position of the vehicle, improving the accuracy of distance measurement. A camera model based on position information is built to measure the distance to the vehicle ahead. Experimental results show that the average detection rate of preceding vehicles is 98.42% and the average error of vehicle ranging is 0.71 m.
  • YE Linquan,ZHU Hui,MEI Tao
    Computer Engineering. 2017, 43(2): 33-37,42. https://doi.org/10.3969/j.issn.1000-3428.2017.02.006
    Current research on parking methods addresses the parallel parking scenario and the vertical parking scenario separately, so a general autonomous parking path planning scheme is proposed. The scheme takes the parking time as the performance index and formulates path planning as a time-optimal control problem with vehicle kinematic constraints and path constraints. The pseudospectral method is used to solve the optimal control problem, and the obtained solution is guaranteed to be consistent with the path constraints by grid refinement, so as to obtain a collision-free feasible parking path. Path planning in both the parallel and vertical parking scenarios is solved. Simulation experiments and real vehicle experiments verify the generality and feasibility of the scheme.
  • LIU Xiang,ZHANG Zhedi,LI Jianfeng,ZHANG Xiaoyun,YU Junwei
    Computer Engineering. 2017, 43(2): 38-42. https://doi.org/10.3969/j.issn.1000-3428.2017.02.007
    Aiming at the problem that it is hard to review an accident scene or verify the authenticity of known data when the information in a traffic accident video is insufficient, an algorithm based on inverting the physics of celestial motion is introduced to estimate the geo-temporal position of the video from shadow trajectory data and thereby verify its authenticity. The principle of celestial body movement is used to derive formulas describing the behavior of shadows, and the relationship among shadow length, real solar time and mean solar time is found by data fitting, from which the local latitude is determined. Considering atmospheric refraction and the error between real solar time and mean solar time, and taking advantage of the solar azimuth and altitude, the model is optimized so that an accurate position can be estimated. Experimental results show that the model can retrieve information and reconstruct the scene.
  • YU Zhaowei,WU Xiaobo,SHEN Lin
    Computer Engineering. 2017, 43(2): 43-47,56. https://doi.org/10.3969/j.issn.1000-3428.2017.02.008
    In order to ensure the safety and efficiency of automatic driving,the vanishing points of road images are found based on Hough transform and voting method,thereby establishing the Dynamic Region of Interest(DROI).Then the illumination invariant lane detection algorithm is designed according to the features of white and yellow lanes to realize the detection of lane area under various complicated illumination conditions such as night and tunnel.On this basis,a polar angle constraint algorithm is designed to screen the candidate lanes to get the final effective lane.Experimental results show that the algorithm has a good detection effect which can reach an average accurate detection rate of 93.5% under various complicated illumination conditions.
  • ZHAO Xing,LI Shijun,YU Wei,YANG Sha,DING Yonggang,HU Yahui
    Computer Engineering. 2017, 43(2): 48-56. https://doi.org/10.3969/j.issn.1000-3428.2017.02.009
    The irregularity of data quality across different Internet platforms, caused by openness and multiple sources, negatively affects knowledge acquisition from the Internet in the big data environment. Aiming at this problem, this paper proposes a Web data source quality assessment method. It establishes a unified data model and data quality standard model for multi-source Internet platforms, gives quality standard measurement and representation methods for full-sample analysis of big data, and achieves unified Web data source quality metrics through comprehensive assessment of multidimensional data quality. Experimental results show that this method can comprehensively measure the data quality of Internet platforms and provide accurate and efficient quality evaluation results for users.
  • NIE Wenhui,ZENG Cheng,JIA Dawen
    Computer Engineering. 2017, 43(2): 57-62. https://doi.org/10.3969/j.issn.1000-3428.2017.02.010
    Existing methods or models of microblog hot topics detection are sensitive to the quantity and the scale of microblog,and the detection process is slow.Hence,this paper proposes a topic model based on heat matrix.It uses the heat matrix to obtain heat and the topic-word probability distribution of every latent topic,and uses the common heat of words to extract the semantic relationship between words.Then the hot topics and hot words can be identified accurately.Experimental results on real microblog show that,compared with Latent Dirichlet Allocation(LDA) model,the proposed model has higher efficiency and accuracy rate.It can detect the hot topics which are consistent with real-time events,so that it has better effect in hot spot identification.
  • XIONG Anping,WANG Yunping,ZOU Yang
    Computer Engineering. 2017, 43(2): 63-67. https://doi.org/10.3969/j.issn.1000-3428.2017.02.011
    In HBase, operations are written to the database in the form of appended data. HBase compaction mechanisms occupy plenty of system resources, which affects read performance. To solve this problem, a compaction mechanism based on data redundancy is proposed. By compacting the column files whose ratio of deleted data reaches the threshold, the algorithm reduces space occupation, because it reduces the number of files while cleaning up useless data. Experimental results indicate that, compared with the original HBase compaction mechanism, which only considers file size, file number and time interval, the proposed compaction mechanism improves HBase query efficiency and enhances major compaction capability.
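    A minimal sketch of the file-selection idea described above, assuming each column file exposes its total and deleted byte counts; the field names and the 0.3 threshold are hypothetical, not taken from the paper.

    def select_files_for_compaction(files, delete_ratio_threshold=0.3):
        """Pick the column files whose share of deleted (tombstoned) data reaches
        the threshold, instead of selecting purely by size, number or age."""
        return [f for f in files
                if f["deleted_bytes"] / f["total_bytes"] >= delete_ratio_threshold]

    files = [
        {"name": "hfile-001", "total_bytes": 10_000, "deleted_bytes": 4_500},
        {"name": "hfile-002", "total_bytes": 20_000, "deleted_bytes": 1_000},
        {"name": "hfile-003", "total_bytes": 8_000,  "deleted_bytes": 3_000},
    ]
    print([f["name"] for f in select_files_for_compaction(files)])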
  • WANG Yang,YOU Jinguo,ZHANG Ting,ZHANG Zhengfan
    Computer Engineering. 2017, 43(2): 68-73. https://doi.org/10.3969/j.issn.1000-3428.2017.02.012
    Data cubes are the core data model of data warehouses. The corresponding data cube lattices facilitate querying and navigation because they preserve the semantics of rolling up and drilling down, but the intrinsic structural characteristics of data cubes have not yet been systematically researched. To address this issue, this paper studies the structure and the analytical model of data cubes from the graph view. The experimental results show that, compared with random networks and complex networks, data cube lattices have different structural characteristics in degree distribution, aggregation coefficient, average shortest path and so on. Furthermore, a data cube lattice analytical model is established by utilizing these intrinsic structural characteristics.
  • ZHONG Chuan,CHEN Jun
    Computer Engineering. 2017, 43(2): 74-78. https://doi.org/10.3969/j.issn.1000-3428.2017.02.013
    Aiming at the large scale and high sparsity of user rating data and the poor real-time capability of direct similarity calculation, this paper proposes a layered Exact Euclidean Locality Sensitive Hashing (E2LSH) algorithm based on the p-stable distribution. It finds similar users with the E2LSH algorithm to improve computing efficiency, and after the similar users are obtained, it uses a weighted mean method to predict scores for unrated items to improve the accuracy of the recommendation results. Experimental results show that, compared with the collaborative filtering recommendation algorithm based on Locality Sensitive Hashing (LSH), this algorithm has higher efficiency and recommendation accuracy.
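    The following sketch illustrates p-stable (Gaussian) E2LSH bucketing of user rating vectors and a weighted-mean score prediction over bucket members; the hash width w, the number of hash functions and the toy rating matrix are illustrative assumptions, not the paper's parameters.

    import numpy as np

    rng = np.random.default_rng(0)

    def e2lsh_keys(vectors, n_hashes=4, w=4.0):
        """Project rating vectors with Gaussian (2-stable) random vectors and
        quantise: h(v) = floor((a.v + b) / w).  Vectors sharing a key are
        treated as candidate similar users."""
        d = vectors.shape[1]
        a = rng.normal(size=(n_hashes, d))
        b = rng.uniform(0, w, size=n_hashes)
        return [tuple(np.floor((a @ v + b) / w).astype(int)) for v in vectors]

    def predict(target, neighbours, item):
        """Weighted-mean prediction over neighbours that rated the item."""
        sims = np.array([1.0 / (1.0 + np.linalg.norm(target - n)) for n in neighbours])
        ratings = np.array([n[item] for n in neighbours])
        mask = ratings > 0
        return float(np.dot(sims[mask], ratings[mask]) / sims[mask].sum()) if mask.any() else 0.0

    R = rng.integers(0, 6, size=(6, 8)).astype(float)     # toy user-item ratings, 0 = unrated
    keys = e2lsh_keys(R)
    bucket = [R[i] for i, k in enumerate(keys) if k == keys[0] and i != 0]
    print(predict(R[0], bucket, item=3) if bucket else "no candidates in bucket")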
  • YANG Hao,LIN Xijun,QU Haipeng
    Computer Engineering. 2017, 43(2): 79-84. https://doi.org/10.3969/j.issn.1000-3428.2017.02.014
    Existing Top-k query algorithms are mainly applied to centralized relational databases, but they cause huge communication costs and low efficiency in distributed networks. To solve these problems, an improved Top-k query algorithm is proposed. The algorithm sets up a Pretreatment Index Table (PIT) to prune independent data in the distributed network, builds a candidate subset that contains the correct Top-k results, and performs the Top-k query on it. Experimental results show that the query results of this algorithm are more accurate, and that it has shorter running time and less network overhead compared with the Fagin and naive Top-k query algorithms.
  • YAO Min,YIN Jianwei,TANG Yan,LUO Zhiling
    Computer Engineering. 2017, 43(2): 85-91. https://doi.org/10.3969/j.issn.1000-3428.2017.02.015
    In big data scenarios, traditional data deduplication backup systems face defects such as large backup storage space and insufficient data throughput. Aiming at these defects, this paper designs a distributed backup data deduplication system based on data routing. It uses the data chunk as the deduplication granularity, and its functions involve data routing and data prefetching. Data routing uses a Bloom filter to query the data chunks to be processed, and applies average sampling and neighbor sampling based on the Jaccard distance to prefetch data chunks. The system uses data routing to assign data chunks to the corresponding processing nodes. The hash codes of data chunks obtained through average sampling provide routing information for data routing, and the hash codes obtained through neighbor sampling are used for the first round of data deduplication in the system. Experimental results show that the data throughput of this system increases significantly compared with querying all processing nodes and with fixed data routing, while maintaining the deduplication ratio.
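    A rough sketch of the Bloom-filter-based routing decision described above: each processing node keeps a Bloom filter of chunk fingerprints, and a chunk is routed to a node that has likely seen it, falling back to a hash-based choice otherwise. Filter size, hash count and the fallback rule are illustrative assumptions.

    import hashlib

    class BloomFilter:
        """Small Bloom filter used to remember chunk fingerprints per node."""
        def __init__(self, size=8192, hashes=3):
            self.size, self.hashes, self.bits = size, hashes, bytearray(size)
        def _positions(self, fp):
            for i in range(self.hashes):
                h = hashlib.sha1(f"{i}:{fp}".encode()).hexdigest()
                yield int(h, 16) % self.size
        def add(self, fp):
            for p in self._positions(fp):
                self.bits[p] = 1
        def __contains__(self, fp):
            return all(self.bits[p] for p in self._positions(fp))

    def route_chunk(chunk, node_filters):
        """Send the chunk to the first node whose filter claims to have seen it;
        otherwise pick a node by hash of the fingerprint (stateless fallback)."""
        fp = hashlib.sha1(chunk).hexdigest()
        for node, bf in enumerate(node_filters):
            if fp in bf:
                return node, fp, True          # likely duplicate on this node
        node = int(fp, 16) % len(node_filters)
        node_filters[node].add(fp)
        return node, fp, False                 # new chunk, stored on chosen node

    filters = [BloomFilter() for _ in range(4)]
    print(route_chunk(b"some data chunk", filters))
    print(route_chunk(b"some data chunk", filters))   # routed back to the same node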
  • WANG Jijun,CHENG Hua
    Computer Engineering. 2017, 43(2): 92-97,104. https://doi.org/10.3969/j.issn.1000-3428.2017.02.016
    To obtain GPU power data quickly and accurately, this paper proposes a General Purpose Graphics Processing Unit (GPGPU) power estimation model based on hardware performance counting events. Through analysing the power distribution during GPGPU program execution, it selects a set of performance events that are closely related to the running power of the application program. It then models the relationship between hardware performance counting events and real-time power using a Back Propagation Artificial Neural Network (BP-ANN), and finally builds a GPGPU power estimation model. Experimental results indicate that, compared with the Multiple Linear Regression (MLR) power estimation model, the proposed model has higher estimation accuracy and versatility.
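    A hedged sketch of the modelling step, with scikit-learn's MLPRegressor standing in for the BP-ANN; the counter features and power values are synthetic placeholders rather than measured GPGPU data.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    # Hypothetical samples: each row = values of selected performance counters
    # (e.g. instructions issued, DRAM reads, shared-memory accesses, ...).
    X = rng.random((200, 5))
    power = 40 + 60 * X[:, 0] + 25 * X[:, 2] + rng.normal(0, 2, 200)  # synthetic watts

    scaler = StandardScaler().fit(X)
    model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
    model.fit(scaler.transform(X), power)

    new_counters = rng.random((1, 5))
    print("estimated power (W):", model.predict(scaler.transform(new_counters))[0])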
  • ZHANG Jiaqi,SHEN Jianliang,ZHU Ke
    Computer Engineering. 2017, 43(2): 98-104. https://doi.org/10.3969/j.issn.1000-3428.2017.02.017
    Traditional timing-driven Field Programmable Gate Array (FPGA) placement algorithms have some degree of error when calculating timing cost, and some timing-driven algorithms achieve better placement quality at the expense of runtime. To deal with this problem, this paper proposes a timing-driven parallel placement algorithm, TM_DCP, based on transactional memory. TM_DCP distributes block swaps across multiple threads and uses the Transactional Memory (TM) mechanism to ensure the legality of shared memory accesses. An improved timing-driven algorithm is also added within transactions. Experimental results show that, compared with Versatile Place and Route (VPR), TM_DCP with 8 threads decreases the Critical Path Delay (CPD) by 4.2% on average with a relatively small increase in total wire length. It also achieves a 1.7x speedup and scales well as the number of threads increases.
  • YIN Yingying,YU Zhiyi
    Computer Engineering. 2017, 43(2): 105-110,119. https://doi.org/10.3969/j.issn.1000-3428.2017.02.018
    For data-intensive applications, the large amounts of energy and latency spent transporting data between off-chip memory and on-chip computing elements cause a limitation referred to as the von Neumann bottleneck, which exists even in 2.5D integrated systems. Aiming at this problem, this paper proposes a novel hardware acceleration framework that enables computing in the off-chip memory array of a 2.5D system. It divides the memory into multiple banks and places an accelerator designed for the H.264 decoder in the memory to utilize the high bandwidth provided by the memory array. Simulation results show that, compared with the traditional software implementation, this framework achieves a 7.1x improvement in performance and an 80.5% reduction in energy consumption, while increasing accelerator area by only 2%.
  • LI Zhimin,YIN Beibei,ZHANG Ping,WANG Jibing,WANG Bin,ZHANG Jinpeng
    Computer Engineering. 2017, 43(2): 111-119. https://doi.org/10.3969/j.issn.1000-3428.2017.02.019

    To meet strong real-time demand of flight control software,a real-time fault localization method for embedded software is proposed,and a visualization tool is developed to improve the automation degree of this method.A real-time fault localization model is established to define the suspiciousness on the two levels of software module and function respectively.While computing the suspiciousness,the difference between the actual execution time of modules and the criterion time,the difference between the function execution time in successful test cases and failure ones,and the call relations between modules and functions are used to give real-time fault localization algorithms on the module level and the function level.Simulation results indicate that modules and functions containing faults tend to have higher suspiciousness,which demonstrates the effectiveness of the proposed method.

  • ZHOU Zheng,LIU Yongzhi,SONG Jinlong,MA Weimin,WANG Zhenliang
    Computer Engineering. 2017, 43(2): 120-123,130. https://doi.org/10.3969/j.issn.1000-3428.2017.02.020

    To improve the reliability of the Switched Reluctance Generator (SRG) system, several common failures of the power converter are analyzed. Combining the advantage of phase independence in the asymmetric bridge power converter, a fault-tolerant power converter is devised. The fault-tolerant power converter uses the power tubes of a free phase to replace those of the faulty phase. A hardware platform and a simulation platform of the fault-tolerant switched reluctance starter/generator system are established for validation. Simulation and experimental results show that the fault-tolerant power converter has good fault-tolerant performance, and the maximum amplitude of the output voltage is less than 6 V, which meets the requirements of the relevant standard.

  • WU Mingjie,CHEN Qingkui,YI Meng
    Computer Engineering. 2017, 43(2): 124-130. https://doi.org/10.3969/j.issn.1000-3428.2017.02.021
    Software Defined Network (SDN) technology based on OpenFlow can significantly reduce the communication load between the control plane and the data plane by setting an effective buffer model in the OpenFlow switch. However, buffering the whole data flow causes delay in data stream transmission, which reduces the data transmission performance of the entire SDN network. For this problem, this paper introduces a flow buffer model named PiBuffer and constructs a buffer model based on packet-group granularity. By establishing buffer information for flow routes and flow states in the control plane, it uses a "packet buffering, order preserving within packets" mechanism for flow messages and an "ask to transfer, inform of completion" mechanism for data transfer between switches, and it optimizes the communication messages between the control plane and the data plane to improve the communication performance of the data center network. Software simulation shows that the proposed model has better communication performance than the flow buffer model in an OpenFlow-based data center SDN network.
  • MA Xiaoxia,LI Wenxin,JIN Tian,ZHAO Yanrong,XIA Jiagao
    Computer Engineering. 2017, 43(2): 131-136. https://doi.org/10.3969/j.issn.1000-3428.2017.02.022

    When designing terminal instruments for manned spacecraft, the floating-point nonlinear operations of processing algorithms are implemented by library functions, but software implementations of nonlinear operations are very slow, which limits the application of many floating-point algorithms. A multicore parallel method based on FPGA is presented to handle nonlinear functions and the low running speed. It makes full use of the FPGA's parallel internal architecture to improve nonlinear operation speed. Simulation results show that the method can calculate nonlinear functions within a limited definition domain and effectively improve floating-point operation speed.

  • WANG Lian,ZHU Ke,ZHAO Bo
    Computer Engineering. 2017, 43(2): 137-143. https://doi.org/10.3969/j.issn.1000-3428.2017.02.023
    Network on a Chip (NoC) routers typically use input/output buffers or cross-switch buffers to store microchips (flits). Although this improves router performance, it consumes many resources and significantly increases power consumption. For this problem, bufferless routers have been proposed, but because of inefficient microchip deflection, they are not suitable for medium- and high-load networks. Hence, this paper proposes a new mini-buffered router with a low deflection rate based on a directional vector routing strategy. It uses a bypass register and a loopback register, and uses a maximum bipartite graph matching scheduling algorithm to optimize microchip routing. Simulation and synthesis on Xilinx's Vivado show that the performance of this router is close to that of the RIDER router while the register usage is reduced by 55%, and its performance is better than that of CHIPPER, MinBD and RIDER in high-load networks.
  • XIE Fan,LI Yong,LIU Jiaqiang,SU Li,JIN Depeng
    Computer Engineering. 2017, 43(2): 144-149. https://doi.org/10.3969/j.issn.1000-3428.2017.02.024
    Aiming at the problem that current services in the mobile network are complicated and rigid, this paper proposes an interaction mechanism based on Network Function Virtualization (NFV) and Software Defined Network (SDN). By integrating information obtained from the control plane, the system completes the physical mapping from logic rules to forwarding flows and achieves deployment of complex services as well as dynamic updates of the service chain and its matching policy. Based on the communication mode of the service chain, it meets the demand for Quality of Service (QoS). Experimental results show that the proposed interaction mechanism achieves efficient and rapid deployment of service chains, satisfying the mobile network requirements of flexibility and scalability.
  • LI Tian,SUN Shaohui,LI Hui
    Computer Engineering. 2017, 43(2): 150-155. https://doi.org/10.3969/j.issn.1000-3428.2017.02.025
    Aiming at the problem that the additional degrees of freedom in the vertical dimension cannot be fully utilized, a dual-stage codebook based on a partial Kronecker product is proposed. In the dual-stage codebook, the first stage determines a beam group, identifies the approximate range of the user, and describes the long-term/wideband properties of the channel. The second stage performs column selection from the beam group and quantizes the co-phasing between the two polarizations. Accordingly, the selected beam improves the beam-steering accuracy for the user, and the short-term/subband information of the channel can be tracked by the second stage. The dual-stage codebook achieves a good tradeoff between feedback overhead and channel quantization accuracy in massive Multiple-Input Multiple-Output (MIMO) systems. Simulation results show that the proposed codebook scheme achieves significant performance gain in terms of overall spectral efficiency and throughput.
  • ZHANG Yijiang,YU Jinsen,HAO Ping
    Computer Engineering. 2017, 43(2): 156-162. https://doi.org/10.3969/j.issn.1000-3428.2017.02.026

    In order to solve problems such as complex topology computation and low positioning accuracy induced by random error factors in the localization of Wireless Sensor Network (WSN) nodes, a new node localization algorithm for WSN is proposed. Firstly, the received anchor node information is evaluated and the position of the node to be localized is preliminarily estimated according to a linear parameter evaluation mechanism. Secondly, the linear parameter value is calculated for searching the anchor triangle. Finally, the precise node position is obtained with a weighted evaluation method, which performs a multiple weighted average calculation using the triangle location parameters of different anchor nodes. Simulation results show that this algorithm has higher localization accuracy and lower error than the DV-hop and DV-distance algorithms.

  • YANG Jie,CHEN Rui,GUO Lihong,RUI Xiongli
    Computer Engineering. 2017, 43(2): 163-170. https://doi.org/10.3969/j.issn.1000-3428.2017.02.027
    This paper studies the Energy Efficiency (EE) of Decode-and-Forward One-Way Relay Transmission (DF-OWRT) and Decode-and-Forward Two-Way Relay Transmission (DF-TWRT) systems under a strict delay constraint. It maximizes the EE by jointly optimizing the transmission time and the transmit power. By expressing the transmit powers as functions of the transmission time through the channel capacity and Signal-to-Noise Ratio (SNR) expressions, it turns the joint optimization into an optimization problem with two decision variables. It uses the theory of extreme values of multivariate functions and a gradient descent algorithm to find the optimal transmission time that minimizes the energy consumption, and gives the corresponding energy consumption minimization algorithm. Simulation results show that the DF-TWRT mode is suited to rate-asymmetric applications, while DF-OWRT is suited to scenarios with mobile terminals.
  • ZHUANG Ling,MA Long
    Computer Engineering. 2017, 43(2): 171-175,182. https://doi.org/10.3969/j.issn.1000-3428.2017.02.028
    The interference in Cognitive Radio(CR) resource allocation introduced by the Secondary User(SU) to the Primary User(PU) derives from two aspects:Out-of-Band Leakage(OOBL) and Spectrum Sensing Error(SSE).Filter Bank Multicarrier (FBMC) has small OOBL and high spectral efficiency in comparison with Orthogonal Frequency Division Multiplexing(OFDM).Full consideration of interference sources can reduce the interference from SU to PU and improve CR system throughput.So this paper proposes a resource allocation algorithm considering SSE in CR network,establishes the interference model,decomposes resource allocation into subcarrier allocation and power allocation,and allocates the power to SU under both interference constraint and power constraint.Simulation results based on FBMC and OFDM show that the proposed algorithm causes less interference to PU.The CR system can obtain greater throughput,and the performance of FBMC in interference and throughput is better than that of OFDM.
  • YANG Qi,ZHANG Xi,WANG Ping
    Computer Engineering. 2017, 43(2): 176-182. https://doi.org/10.3969/j.issn.1000-3428.2017.02.029
    Aiming at the problem of key tracking in cloud storage, a new attribute-based encryption scheme with outsourced decryption is proposed. By adding a key factor to the key, a decryption record table T of all users is generated, so the data owner can monitor the decryption behavior of users at any time and obtain a reference for detecting users who maliciously spread decryption keys. By querying the table T, the data owner can quickly detect whether a key is valid for an encrypted file and obtain the user identity associated with the key. At the same time, most of the decryption operations are moved to the cloud decryption server, and users only need one exponentiation operation to recover the plaintext, thus reducing the decryption work on user clients. Analysis results show that the scheme meets the requirements of secure, efficient and traceable key management in cloud storage.
  • YANG Xiaodong,GAO Guojuan,ZHOU Qixu,LI Yanan,WANG Caifen
    Computer Engineering. 2017, 43(2): 183-188. https://doi.org/10.3969/j.issn.1000-3428.2017.02.030
    Considering the security problem of e-government data exchange,a double trapdoor Hash function is proposed by using discrete logarithm assumption.Moreover,this function is proved to satisfy the properties,such as validity,trapdoor collision,collision resistance,key compromise resistance and so on.Combining the proposed Hash function with existing proxy re-signature schemes,a new secure e-government data exchange scheme is presented,and the security of this scheme can be reduced to the security of the trapdoor Hash function and the underlying proxy re-signature scheme.Analysis results show that the proposed scheme can effectively reduce the cost of re-signature generation and re-signature verification,enhance real-time data exchange,and reduce storage space.Therefore,the proposed scheme is suitable for devices with limited computing resources.
  • LI Mengzhu,ZHANG Wenying,CHEN Wanpu
    Computer Engineering. 2017, 43(2): 189-193. https://doi.org/10.3969/j.issn.1000-3428.2017.02.031
    This paper analyzes ZORRO in detail with an algebraic method, according to the iterative characteristic of its round function. A 4-round differential characteristic is iterated 5 times to form a 20-round differential characteristic; the algebraic analysis method is then applied to the iterative differential characteristic of ZORRO, the collected data are substituted into simple equations, and the equations are solved. Experimental results show that the algebraic method is more intuitive and can recover the key efficiently.
  • SUN Mingsong,HAN Qun
    Computer Engineering. 2017, 43(2): 194-200,205. https://doi.org/10.3969/j.issn.1000-3428.2017.02.032
    To detect the communication of Advanced Persistent Threats (APT), this paper presents a detection method for server-side and host-side log data. It establishes an IP address database and uses the DBSCAN clustering algorithm to collect and process the massive log data to obtain abnormal communication logs. The abnormal communication logs are then examined by Latent Dirichlet Allocation (LDA) modeling of 14 APT communication features. Experimental results show that LDA modeling improves the efficiency and accuracy of APT communication detection compared with Latent Semantic Analysis (LSA) and Probabilistic Latent Semantic Analysis (PLSA) detection models.
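    A minimal sketch of the DBSCAN step on numeric features derived from communication logs, where noise points are treated as abnormal logs to be passed to the LDA stage; the feature choice, eps and min_samples are illustrative, and the data are synthetic.

    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(2)
    # Hypothetical per-connection features: [bytes out, bytes in, duration, interval]
    normal = rng.normal([2e3, 8e3, 1.0, 300], [300, 900, 0.2, 30], size=(300, 4))
    beacon = rng.normal([150, 150, 0.1, 60], [5, 5, 0.01, 1], size=(5, 4))  # suspicious
    X = StandardScaler().fit_transform(np.vstack([normal, beacon]))

    labels = DBSCAN(eps=0.8, min_samples=10).fit_predict(X)
    abnormal_idx = np.where(labels == -1)[0]      # noise points = abnormal logs
    print("log lines flagged for LDA-based inspection:", abnormal_idx)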
  • CHEN Jinfei,XU Xin
    Computer Engineering. 2017, 43(2): 201-205. https://doi.org/10.3969/j.issn.1000-3428.2017.02.033
    Android devices usually use a digital or graphical password to unlock the screen, but this form of password is not safe enough, and some versions of Android have keyguard bypass vulnerabilities. Therefore, this paper designs an Android keyguard system that unlocks the screen with the user's unique voiceprint feature. It uses the Mel Frequency Cepstrum Coefficient (MFCC) as the voiceprint feature and Dynamic Time Warping (DTW) for voiceprint pattern matching, and uses the Android NDK to achieve fast voiceprint recognition. Experimental results show that this voiceprint keyguard system has a good unlocking success rate and unlocking speed, and compared with digital or graphical unlocking, unlocking the screen with a voiceprint is safer and gives users a better experience.
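    The matching step can be illustrated with a plain Dynamic Time Warping distance between two MFCC sequences (assumed to be already extracted); the sequences below are synthetic, and the acceptance decision here is only a relative comparison, not the system's actual threshold.

    import numpy as np

    def dtw_distance(a, b):
        """Dynamic Time Warping distance between two MFCC sequences
        (frames x coefficients), using Euclidean frame distance."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        return cost[n, m] / (n + m)

    rng = np.random.default_rng(3)
    enrolled = rng.random((80, 13))                       # stored template, hypothetical
    attempt  = enrolled + rng.normal(0, 0.02, (80, 13))   # same speaker, slight variation
    impostor = rng.random((90, 13))
    print(dtw_distance(enrolled, attempt) < dtw_distance(enrolled, impostor))  # True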
  • AN Jianrui,ZHANG Longbo,WANG Lei,JIN Chao,HUAI Hao,WANG Xiaodan
    Computer Engineering. 2017, 43(2): 206-209. https://doi.org/10.3969/j.issn.1000-3428.2017.02.034
    Since the existing OPTICS algorithm is time-consuming, highly complex and unsuitable for data-intensive environments, this paper presents an improved algorithm based on grids and weighted information entropy. It first divides the data set into a number of grid cells and then introduces the weighted information entropy concept to the divided grid units. By calculating the weighted information entropy, it self-adaptively computes the minimum density threshold for each grid cell. For the grid cells that satisfy the minimum density threshold, a dense-grid concept is proposed to compress data points by replacing gridded data sets with their centroid points. Finally, the GeoLife Trajectories dataset is employed to test the algorithm's performance, and the validity of the improved algorithm is proved by both theoretical analysis and experimental results.
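    A heavily simplified sketch of the grid partition and a per-cell weighted information entropy used to derive an adaptive minimum-density threshold; the weighting scheme, bin count and threshold formula are placeholders, since the paper's exact definitions are not given in the abstract.

    import numpy as np

    def cell_weighted_entropy(attr_values, weights):
        """Weighted information entropy of an attribute distribution inside one
        grid cell: H = -sum(w_k * p_k * log2(p_k)) over attribute bins."""
        hist, _ = np.histogram(attr_values, bins=len(weights))
        p = hist / hist.sum()
        nz = p > 0
        return float(-np.sum(np.asarray(weights)[nz] * p[nz] * np.log2(p[nz])))

    rng = np.random.default_rng(4)
    pts = rng.random((2000, 3)) * [100, 100, 50]        # x, y and a speed-like attribute
    cells = {}
    for p in pts:                                       # partition into 10x10 grid cells
        cells.setdefault((int(p[0] // 10), int(p[1] // 10)), []).append(p[2])

    base_minpts, weights = 5, [1.0, 1.0, 2.0, 2.0]      # illustrative per-bin weighting
    for cell, vals in list(cells.items())[:3]:
        h = cell_weighted_entropy(vals, weights)
        minpts = max(2, int(round(base_minpts * h / np.log2(len(weights)))))
        print(cell, "adaptive MinPts =", minpts)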
  • WANG Youwei,ZHU Jianming,FENG Lizhou,LI Yang
    Computer Engineering. 2017, 43(2): 210-214. https://doi.org/10.3969/j.issn.1000-3428.2017.02.035
    In the traditional Fruit Fly Optimization Algorithm (FOA), the new positions of fruit flies are often limited to particular regions, so the optimization results depend strongly on the search radius and easily fall into local optima. On this basis, an improved FOA is proposed. The search scope of the fruit fly in each dimension is divided into two parts, and the concepts of near suburb and far suburb are introduced. A local-optimum-oriented factor is introduced, and the search intensities of different scopes are coordinated by adjusting this factor. The fruit fly position is updated by randomly selecting a specific dimension. Simulation results show that, compared with traditional adaptive chaos FOA methods, the proposed method can effectively avoid the effect of the search radius and has evident advantages in convergence accuracy and convergence speed.
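    For reference, a sketch of the plain FOA baseline whose fixed search radius causes the dependence discussed above; it is not the improved algorithm, and the radius, swarm size and test function are illustrative.

    import numpy as np

    def foa_minimize(f, dim=2, flies=30, iters=200, radius=1.0, seed=0):
        """Plain Fruit Fly Optimization: flies search randomly around the swarm
        location; the swarm moves to the best smell concentration found.  The
        fixed search radius illustrates the limitation the paper addresses."""
        rng = np.random.default_rng(seed)
        swarm = rng.uniform(-10, 10, dim)
        best, best_val = swarm.copy(), f(swarm)
        for _ in range(iters):
            candidates = swarm + rng.uniform(-radius, radius, (flies, dim))
            vals = np.apply_along_axis(f, 1, candidates)
            i = np.argmin(vals)
            if vals[i] < best_val:
                best, best_val = candidates[i].copy(), vals[i]
                swarm = best
        return best, best_val

    sphere = lambda x: float(np.sum(x ** 2))
    print(foa_minimize(sphere))        # converges near the origin for this simple case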
  • WEI Xiaocong,LIN Hongfei
    Computer Engineering. 2017, 43(2): 215-219,226. https://doi.org/10.3969/j.issn.1000-3428.2017.02.036
    The inconsistency between the feature spaces of the source domain and the target domain results in an accuracy decline in transfer learning. To resolve this problem, this paper proposes a cross-domain feature alignment method based on Word2Vec. Adjectives, adverbs, nouns and verbs are selected as features, and pivot features are selected from the source and target domains for every part of speech. For each pivot feature, the most similar non-pivot feature is calculated in the source domain and the target domain respectively as its similar pivot feature, and similar pivot feature pairs are constructed accordingly. Every similar pivot feature appearing in both domains is transformed according to these pairs, so that features representing similar semantic information are aligned. Machine learning is then performed on the source and target domain data after feature transformation. Experimental results show that the average accuracy of the proposed algorithm is 88.2%, higher than that of the baseline algorithm.
  • LIANG Tingting,LI Chunqing,LI Haisheng
    Computer Engineering. 2017, 43(2): 220-226. https://doi.org/10.3969/j.issn.1000-3428.2017.02.037
    Aiming at the problem of polysemous words and synonyms in text processing for online education support technology, a Top-k learning resource recommendation algorithm based on content-filtering PageRank is proposed. A learning resource filtering recommendation model is constructed based on content vector space filtering. The model relies on a resource matching mode instead of semantic similarity, so as to avoid missed detection of polysemous words and synonyms. The Google PageRank algorithm is combined with this resource matching model to construct a weight matrix that considers the relationships between resources, replacing the hyperlink structure between Web pages used by the traditional PageRank algorithm for dividing resource types. The Markov convergence matrix is constructed, and the Top-k algorithm is used to refine the recommended results. Experimental results on a public learning resource dataset show that the proposed algorithm is feasible in terms of computation time and coverage rate.
  • WANG Hanbo,SUN Qilin
    Computer Engineering. 2017, 43(2): 227-233,240. https://doi.org/10.3969/j.issn.1000-3428.2017.02.038
    Complex ontology matching methods are mainly classified into correspondence-pattern-based methods and machine-learning-based methods. The former are heuristic methods designed with some knowledge about the ontologies to be aligned, and the latter easily fall into local optima. This paper proposes a novel complex ontology matching method that benefits from both correspondence patterns and machine learning. The key of the method is the introduction of path features, which characterize the information of instances and are instantiations of correspondence patterns. Path features are combined by the First Order Inductive Learner (FOIL) to acquire complex mappings. Experimental results show that this method automatically learns complex correspondences between ontologies and effectively relieves the local optimum problem compared with the FOIL-based complex matching method.
  • LIU Jinwen,XU Jing,ZHANG Liping,RUI Weikang
    Computer Engineering. 2017, 43(2): 234-240. https://doi.org/10.3969/j.issn.1000-3428.2017.02.039
    To extract highly reliable personal relations from massive network information, semi-supervised learning based on label propagation can improve relation extraction performance with a small amount of labeled data, but randomly selecting training samples may reduce that performance. This paper therefore combines the label propagation algorithm with active learning to extract relationships between persons. When acquiring training data, the samples with the maximum uncertainty are actively selected for labeling. Experimental results on personal relation extraction show that the active learning method improves the average F1 value by 2.3% compared with the label propagation algorithm alone.
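    A small sketch of combining label propagation with uncertainty-based active learning, using scikit-learn's LabelSpreading on synthetic data; the entropy-based query rule and the loop length are illustrative choices rather than the paper's exact procedure.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.semi_supervised import LabelSpreading

    X, y = make_classification(n_samples=300, n_features=10, n_informative=5,
                               n_classes=3, random_state=0)
    labels = np.full_like(y, -1)
    labels[:15] = y[:15]                       # only a few labelled relation instances

    for _ in range(10):                        # active-learning loop
        model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, labels)
        proba = model.label_distributions_
        unlabeled = np.where(labels == -1)[0]
        # pick the unlabeled sample the model is least certain about (max entropy)
        ent = -np.sum(proba[unlabeled] * np.log(proba[unlabeled] + 1e-12), axis=1)
        query = unlabeled[np.argmax(ent)]
        labels[query] = y[query]               # simulate asking an annotator
    print("accuracy on all data:", (model.transduction_ == y).mean())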
  • GU Ping,YANG Yang
    Computer Engineering. 2017, 43(2): 241-247. https://doi.org/10.3969/j.issn.1000-3428.2017.02.040
    The minority class samples in an imbalanced data set are distributed unevenly, and traditional oversampling algorithms do not handle this discrepancy. To handle it, this paper proposes an oversampling algorithm oriented to the subdivision of minority class samples in imbalanced data sets, named SD-ISMOTE. The algorithm divides minority class samples into three subdivisions, DANGER, AL_SAFE and SAFE, according to the distribution of their k-nearest neighbors; samples in DANGER and AL_SAFE are closer to the decision boundary. The algorithm uses the ISMOTE idea to perform random interpolation in an n-dimensional ball space, expanding the sampling range of the samples in DANGER and AL_SAFE. In addition, to avoid redundancy, roulette selection is introduced into SD-ISMOTE. Experimental results show that, compared with the Borderline-SMOTE and ISMOTE algorithms, SD-ISMOTE effectively improves the imbalance of the data distribution and brings better classification performance on imbalanced data sets with C4.5 and naive Bayes.
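    A sketch of the subdivision step only: each minority sample is tagged DANGER, AL_SAFE or SAFE by the share of majority samples among its k nearest neighbours. The cut-off ratios are illustrative, and the ISMOTE interpolation and roulette selection are not reproduced here.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def subdivide_minority(X, y, minority_label=1, k=5):
        """Return a DANGER / AL_SAFE / SAFE tag for every minority sample based
        on the share of majority samples among its k nearest neighbours.
        Cut-offs (>=0.6, >=0.3) are illustrative, not taken from the paper."""
        nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
        tags = {}
        for i in np.where(y == minority_label)[0]:
            _, idx = nn.kneighbors(X[i:i + 1])
            neigh = idx[0][1:]                      # drop the sample itself
            majority_ratio = np.mean(y[neigh] != minority_label)
            if majority_ratio >= 0.6:
                tags[i] = "DANGER"
            elif majority_ratio >= 0.3:
                tags[i] = "AL_SAFE"
            else:
                tags[i] = "SAFE"
        return tags

    rng = np.random.default_rng(5)
    X = np.vstack([rng.normal(0, 1, (90, 2)), rng.normal(1.5, 1, (10, 2))])
    y = np.array([0] * 90 + [1] * 10)
    print(subdivide_minority(X, y))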
  • TANG Yonghe,JIANG Liehui,HOU Yifan,WANG Ruimin
    Computer Engineering. 2017, 43(2): 248-251,256. https://doi.org/10.3969/j.issn.1000-3428.2017.02.041

    For the low contrast between ridges and valleys,shake blur and other issues of fingerprint images obtained with digital cameras,a contactless fingerprint image enhancement algorithm based on Retinex theory and Short Time Fourier Transform(STFT) analysis is proposed.According to Retinex theory,the non-uniformity effect of the illumination is eliminated by using the relative brightness and darkness of each pixel in the image,and the contrast between ridges and valleys is increased,which enhances the contactless fingerprint image initially.The method of STFT analysis is used to extract the texture information of the fingerprint image,such as orientation map,frequency map and so on.Then the angle filter and radius filter are built to filter the results of initial enhancement in the frequency domain,which further improves the image contrast while effectively suppressing the interference of noise.Experimental results show that the present algorithm has a better performance on contactless fingerprint image enhancement,which can not only effectively improve the contrast between ridges and valleys of fingerprint,but also maintain the continuity of the ridge better,and its average speed is a little faster than the method combining homomorphic filtering and Gabor filtering.

  • TONG Lijing,ZHENG Junchao
    Computer Engineering. 2017, 43(2): 252-256. https://doi.org/10.3969/j.issn.1000-3428.2017.02.042
    To overcome the large calculation amount and long processing time of current point cloud surface boundary extraction algorithms, a secondary boundary extraction algorithm for point cloud surfaces is put forward. First of all, the space bounding box method is used to divide the 3D model evenly into several sub-cubes, so that every cloud point falls into a small cube. The boundary sub-cubes are identified by the number and distribution of the sub-cubes that contain cloud points. Then, according to the distribution of the data points, every point in a boundary sub-cube and its K neighboring points are projected onto a plane. Finally, a boundary point is identified according to the principle that the angle between one axis and the vector formed by the projected point and the center point meets preset conditions. Experimental results show that the secondary extraction algorithm for scattered point cloud surface boundaries saves time and improves precision.
  • ZHANG Zhengben,CAI Pengfei,SUN Ting
    Computer Engineering. 2017, 43(2): 257-263. https://doi.org/10.3969/j.issn.1000-3428.2017.02.043
    As most shadow detection methods are limited by penumbral shadows and cannot process umbra shadow well,a top-down dynamic shadow detection and tracking method based on Kalman Filtering(KF) is proposed.Firstly,contour information of the target is obtained by gradient information,and foreground segmentation is also improved.Then,the similarity of texture and spatial similarity of brightness distortion for each potential shadow are analyzed.Finally,in the frame of data association,time consistency between target and shadow is used to increase shadow detection rate,combining with KF.Experimental results on several data sets show that the proposed method is stable and efficient.Compared with geometry information method,color space difference method and multi-level method,it has higher average shadow identification rate.
  • LUO Lan,DU Qinsheng
    Computer Engineering. 2017, 43(2): 264-267. https://doi.org/10.3969/j.issn.1000-3428.2017.02.044
    In this paper,an image fusion quality assessment algorithm based on local variance is proposed to predict the distortion on visual information.Considering that multi-scale representation technology allows for image analysis in different scale,multi-scale image representation is used to accurately evaluate the performance of image fusion algorithm.The proposed algorithm contains three steps,multi-scale image decomposition,measuring similarities of source image and fused image layer by layer,and merging all similarities to get the final score.Experimental results of parameter selection and fusion algorithms comparison show that the evaluation result of the proposed algorithm is consistent with the subjective evaluation and has high reliability.
  • ZHU Xiantan,SHI Fanhuai
    Computer Engineering. 2017, 43(2): 268-272,279. https://doi.org/10.3969/j.issn.1000-3428.2017.02.045
    To improve the robustness and accuracy of long-term object tracking, this paper proposes an improved Tracking Learning Detection (TLD) method. The tracking point set consists of a few scale-invariant BRISK keypoints and uniformly distributed points, and it replaces the uniformly distributed points in TLD to reduce the computation of the tracking part and improve tracking robustness. When the forward-backward error indicates occlusion, the tracking range is extended and tracking is resumed using the spatial context information of the target, which solves the occlusion problem. Experimental results show that the improved TLD method has better tracking performance on several test sequences and outperforms the traditional TLD in terms of robustness and accuracy.
  • ZHANG Jingjing,NIE Hongyu,YU Qiang
    Computer Engineering. 2017, 43(2): 273-279. https://doi.org/10.3969/j.issn.1000-3428.2017.02.046
    Aiming at the existing problem of the low precision in crack detection of the surface of railway concrete bridge,a novel bridge crack detection approach based on percolation model with multi-scale input image is proposed.Firstly,weighted piecewise function is employed to enhance contrast ratio,and the optimal threshold segmentation is adopted to largely filter non-crack region.Secondly,different Gaussian kernels are used to get different scales of the input image.Thirdly,multi-scale images of concrete bridge are put into the percolation model to generate high accuracy binary map including only crack information.Finally,the crack information,such as area,length and maximum width,is extracted by the gradients of these cracks on this binary map.Experimental results demonstrate that the proposed approach can improve detection accuracy and stability.
  • CHEN Junjun,XU Bing
    Computer Engineering. 2017, 43(2): 280-285,292. https://doi.org/10.3969/j.issn.1000-3428.2017.02.047
    Aiming at the restrictions on machine vision images under fog and haze weather conditions, a machine vision image defogging algorithm based on the dark channel prior is proposed. It is based on the atmospheric scattering model and the dark channel prior principle. It uses a new method to estimate the atmospheric brightness: a gray zone is specified in the dark channel map of the foggy image, and the brightness with the highest frequency is selected as the atmospheric brightness. The foggy image is then converted to a grayscale image and enhanced with histogram equalization, so that the structural information contained in the foggy image is revealed as much as possible. The running time is reduced by using guided filtering to optimize the transmission. Experimental results show the proposed algorithm achieves better defogging effects and improves operational efficiency.
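    A hedged sketch of the dark channel computation and of estimating the atmospheric brightness as the most frequent intensity within the brightest portion ("gray zone") of the dark channel; the patch size, zone fraction and random test image are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import minimum_filter

    def dark_channel(img, patch=15):
        """Dark channel prior: per-pixel minimum over RGB, then a local minimum
        filter over a patch x patch window."""
        return minimum_filter(img.min(axis=2), size=patch)

    def atmospheric_light(img, patch=15, zone=0.001):
        """Take the brightest `zone` fraction of dark-channel pixels as the gray
        zone and return the most frequent intensity among them (an illustrative
        reading of the histogram-mode idea described above)."""
        dc = dark_channel(img, patch)
        n = max(1, int(zone * dc.size))
        flat = img.reshape(-1, 3)
        candidates = flat[np.argsort(dc.ravel())[-n:]]
        gray = candidates.mean(axis=1).astype(int)
        return float(np.bincount(gray).argmax())    # brightness of highest frequency

    rng = np.random.default_rng(6)
    hazy = rng.integers(120, 255, size=(240, 320, 3)).astype(np.uint8)
    print("estimated atmospheric brightness:", atmospheric_light(hazy))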
  • CHEN Mingjian,LIN Wei,ZENG Bi
    Computer Engineering. 2017, 43(2): 286-292. https://doi.org/10.3969/j.issn.1000-3428.2017.02.048
    Aiming at the problem of how an intelligent robot can explore an indoor environment and construct its map along an optimal route, a path planning algorithm based on a rolling window is proposed. The exploration strategy for unknown environments is improved from the traditional ox-plowing (boustrophedon) traversal pattern and, combined with the rolling window, realizes rolling exploration and map-building path planning for the unknown environment. Meanwhile, the A* algorithm is used to plan the local path within the rolling window and the escape path when the robot enters a dead end. Simulation results based on different algorithms show that the proposed algorithm effectively reduces the number of map-building nodes, shortens the map-building path, and enables the robot to build the environment map more quickly and efficiently.
  • ZHANG Cheng,HE Jian,ZHANG Yan,ZHOU Mingwo
    Computer Engineering. 2017, 43(2): 293-298,303. https://doi.org/10.3969/j.issn.1000-3428.2017.02.049
    Aiming at the problem that the accuracy of fatigue driving detection systems based on Electroencephalogram (EEG) signals is not high when running on small, low-powered wearable devices, the best window width and classification algorithm are selected on the basis of analyzing the relations among the Attention, Meditation and Blink values extracted from the subject's left prefrontal EEG signal. This paper designs a fatigue driving detection algorithm suitable for wearable devices and implements the system on Android intelligent devices. Accuracy, true positive rate, false positive rate, sensitivity and specificity are used to measure the performance of four algorithms: k-nearest neighbors, decision tree, naive Bayes and multi-layer artificial neural network, and kNN is chosen for the implementation. Experimental results show that the accuracy of the system reaches 83.7%, with sensitivity of 73.8% and specificity of 88.6%. The system is wireless, real-time, accurate and efficient.
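    A minimal sketch of the window-level kNN classification on the three signals; the per-window features (mean Attention, mean Meditation, blink count), the window statistics and k are illustrative, and the data are synthetic.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)
    # Hypothetical per-window features: mean Attention, mean Meditation, blink count
    alert   = np.column_stack([rng.normal(65, 8, 200), rng.normal(45, 8, 200), rng.poisson(4, 200)])
    fatigue = np.column_stack([rng.normal(35, 8, 200), rng.normal(70, 8, 200), rng.poisson(1, 200)])
    X = np.vstack([alert, fatigue])
    y = np.array([0] * 200 + [1] * 200)           # 0 = alert, 1 = fatigued

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr)
    print("window classification accuracy:", clf.score(Xte, yte))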
  • ZHAO Limin,ZHU Xiaojun
    Computer Engineering. 2017, 43(2): 299-303. https://doi.org/10.3969/j.issn.1000-3428.2017.02.050
    Aiming at the identification of motor imagery Electroencephalogram (EEG) signals, this paper proposes an improved feature extraction and classification method for EEG signals. The original signal is decomposed into a series of Product Function (PF) components by Local Mean Decomposition (LMD), and meaningless PFs are eliminated according to the μ rhythm and β rhythm ranges of EEG signals. Following the principle of characteristic time selection, the motor imagery signals from 4 s to 6 s are selected as classification data. Then the sample entropies of the second and third-order PFs of the C3 and C4 lead signals are calculated, and their mean values MSampEn(C3, C4) are used as elements of the EEG feature vector, which is classified with a Support Vector Machine (SVM) to recognize imagined movements. Experimental results indicate that the proposed feature extraction method has higher classification accuracy than the Empirical Mode Decomposition (EMD) and Ensemble Empirical Mode Decomposition (EEMD) methods.
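    A sketch of the sample entropy feature and SVM classification; the LMD decomposition is not reproduced, so the entropy is computed directly on synthetic signals, and the parameters m and r are common defaults rather than the paper's values.

    import numpy as np
    from sklearn.svm import SVC

    def sample_entropy(x, m=2, r_factor=0.2):
        """Standard sample entropy SampEn(m, r) of a 1-D signal, with
        r = r_factor * std(x).  Used here in place of the full LMD pipeline."""
        x = np.asarray(x, dtype=float)
        r = r_factor * x.std()
        def count(mm):
            templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            dist = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
            return (np.sum(dist <= r) - len(templates)) / 2.0   # exclude self-matches
        b, a = count(m), count(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.inf

    rng = np.random.default_rng(8)
    feat = lambda sig: [sample_entropy(sig[i:i + 250]) for i in range(0, 500, 250)]
    left  = [feat(np.sin(np.linspace(0, 40, 500)) + rng.normal(0, 0.1, 500)) for _ in range(20)]
    right = [feat(rng.normal(0, 1, 500)) for _ in range(20)]
    X, y = np.array(left + right), np.array([0] * 20 + [1] * 20)
    print(SVC(kernel="rbf").fit(X, y).score(X, y))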
  • LUO Yong,GUO Yamo,LIU Chong
    Computer Engineering. 2017, 43(2): 304-307,316. https://doi.org/10.3969/j.issn.1000-3428.2017.02.051
    To increase the accuracy of the fuzzy model, a fuzzy classification model based on the firework algorithm and the Pareto optimal solution set is proposed. The fuzzy clustering method is applied to build the initial fuzzy model, and the structure and parameters of the model are optimized by the firework algorithm. In each iteration, the fast non-dominated sorting algorithm and the Pareto optimal solution set are used to evaluate and select the offspring. Simulation results on the Wine data set demonstrate that the proposed method can build a fuzzy classification system of simple, easy-to-understand structure while ensuring high classification accuracy.
  • LIU Weiming,GAO Xiaodong,MAO Yimin,ZHOU Zhaofei
    Computer Engineering. 2017, 43(2): 308-316. https://doi.org/10.3969/j.issn.1000-3428.2017.02.052
    In landslide hazard prediction, rainfall and other uncertain factors are difficult to obtain and handle effectively, and the standard back propagation algorithm suffers from local minima and slow training. To improve prediction accuracy, this paper proposes an uncertainty genetic neural network method for landslide prediction. Based on a modified genetic algorithm and a back propagation neural network classification algorithm, combined with landslide disaster prediction theory and taking rainfall and other uncertainties into account, it proposes the concept of separating uncertain data, elaborates the processing method for uncertain attribute data, and builds an uncertainty genetic neural network and a landslide hazard prediction model. The Baota district of Yan'an is selected as the study area to verify the method. Experimental results show that the effective accuracy and the overall accuracy of the proposed method are 92.1% and 86.7% respectively, which verifies the feasibility of the uncertainty genetic neural network algorithm in landslide hazard prediction.
  • DING Zhiguo,DING Li,TANG Hongfei
    Computer Engineering. 2017, 43(2): 317-321. https://doi.org/10.3969/j.issn.1000-3428.2017.02.053
    In a Radio Frequency Identification (RFID) system, to overcome the problem that reservation slots cannot be used effectively in tag reservation anti-collision algorithms, this paper proposes a new anti-collision algorithm. In the new algorithm, each round of tag identification is divided into two phases: tag reservation and tag reading. In the tag reservation phase, a tag to be recognized randomly selects a reservation slot and sends the first part of its serial number instead of the fixed-length short random sequence sent in traditional tag reservation algorithms, and the number of reservation slots is dynamically adjusted according to the estimated number of tags. According to the status of the reservation slots, the reader notifies the tags that chose readable slots to continue sending the remaining part of their serial numbers in the tag reading phase. Simulation results show that the algorithm's dynamic reservation mechanism reduces redundant information transmission and can effectively reduce the number of collision and idle slots.