
15 September 2016, Volume 42 Issue 9
    

  • YU Chenglong,WANG Yongwen
    Computer Engineering. 2016, 42(9): 1-4. https://doi.org/10.3969/j.issn.1000-3428.2016.09.001
    Single Instruction Multiple Data (SIMD) is an effective approach to realizing data-level parallelism, but unaligned data access seriously hinders vectorization of programs and degrades processor performance. To reduce the latency of unaligned memory access, the memory access patterns of high-performance application programs are modeled, and two structures are designed and implemented: an SIMD unaligned memory access structure based on buffer-line splitting, and an unaligned memory access structure based on a dual cache. Experimental results on the dual-cache structure show that, for the SIMD-vectorized addition of two arrays, the performance of unaligned code reaches 99% of that of aligned code, improving the memory access efficiency of SIMD vectorization.
  • WANG Ting,DONG Qiwen,FAN Feifei
    Computer Engineering. 2016, 42(9): 5-14. https://doi.org/10.3969/j.issn.1000-3428.2016.09.002

    Dynamic consolidation of Virtual Machines (VM) is a means of resource management in cloud data centers. By live-migrating virtual machines away from overloaded or underloaded Physical Machines (PM) and switching idle nodes to sleep mode while guaranteeing Quality of Service (QoS), it improves resource utilization and energy efficiency in cloud data centers, making it an effective way to reduce energy consumption and realize green computing. This paper surveys the relevant literature of recent years, analyzes subproblems including overload detection, underload detection and destination host selection, and elaborates on the techniques involved, such as load prediction and optimization. It describes the experimental settings used for verification and evaluation of the proposed techniques, and discusses open theoretical problems and future research directions.

  • WANG Hongya,ZHANG Huaqing,LIU Xiaoqiang
    Computer Engineering. 2016, 42(9): 15-20. https://doi.org/10.3969/j.issn.1000-3428.2016.09.003
    The Esper event processing system can be used for complex event processing and data analysis, and is suitable for handling large volumes of historical or real-time messages and event streams. This paper analyzes a data stream processing system based on the Esper engine on multi-core computing platforms. It describes the design and implementation of an experimental platform, designs complete query statements and test cases, and uses the platform to test the performance of the Esper engine on multi-core platforms, with both real-time monitoring and offline data analysis used to measure system performance. Experimental results show that the Esper data stream processing system does not benefit well from multi-core platforms.
  • QU Xi,HUANG Huimin,ZHANG Ning,YU Guoqiang
    Computer Engineering. 2016, 42(9): 21-25,32. https://doi.org/10.3969/j.issn.1000-3428.2016.09.004
    Aiming at the requirements of the upper-stage rocket computer, such as strong real-time performance, high reliability and resistance to space radiation, a new computer technology based on redundancy and reconfiguration is proposed. This paper describes the key technologies of the upper-stage rocket computer, including the computer architecture, working mode, redundancy reconfiguration, self-recovery and radiation hardening. It designs the upper-stage rocket computer with multi-redundancy, reconfiguration and self-recovery technologies, enabling the computer to tolerate double redundancy faults and improving its reliability in the space environment. Experimental results show that the upper-stage rocket computer can tolerate double redundancy faults and detect faults effectively, with a self-recovery time of less than 8 seconds.
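The fault-masking idea behind such a redundant computer can be illustrated with a generic majority voter over redundant channels. This is a textbook sketch under assumed semantics, not the paper's dual-redundancy reconfiguration design:

```python
from collections import Counter

def vote(outputs):
    """Majority vote across redundant computing channels: return the
    agreed value and how many channels agree. A faulty channel is masked
    as long as a majority of channels still produce the same output."""
    value, count = Counter(outputs).most_common(1)[0]
    return value, count
```

A self-recovering system would additionally resynchronize the disagreeing channel with the voted state.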
  • ZHANG Minmin,ZHANG Yun,DUAN Yuanxin
    Computer Engineering. 2016, 42(9): 26-32. https://doi.org/10.3969/j.issn.1000-3428.2016.09.005
    Through analysis of multiple distributed architectures for Software Defined Networking (SDN), a new architecture named Distributed Control Architecture (DCA) is proposed. DCA separates the control layer into a load balancing layer and a control system layer. It runs the load balancing algorithm on several load balancers, which effectively prevents the load balancer from becoming a new bottleneck in the network. Besides CPU and memory usage, the number of connection requests is also added to the load factor, and based on dynamic feedback information the load condition of each controller can be reflected accurately. Simulation results based on virtual machines and Mininet prove the feasibility of this architecture.
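The load factor described above (CPU usage, memory usage, plus connection requests) might be combined as in the following sketch. The weights and the normalization by a connection capacity are assumptions, since the abstract does not give the exact formula:

```python
def load_factor(cpu, mem, conn, conn_capacity, weights=(0.4, 0.3, 0.3)):
    """Combine CPU usage, memory usage and a normalized connection-request
    count into one scalar load value (weights are hypothetical)."""
    w_cpu, w_mem, w_conn = weights
    return w_cpu * cpu + w_mem * mem + w_conn * (conn / conn_capacity)

def least_loaded(controllers):
    """Pick the controller with the smallest current load factor, as a
    load balancer would when dispatching a new request."""
    return min(controllers, key=lambda c: load_factor(c["cpu"], c["mem"],
                                                      c["conn"], c["cap"]))
```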
  • CHEN Xiaomin,SU Junxu,TAN Wei,ZHU Yimin,ZHU Qiuming
    Computer Engineering. 2016, 42(9): 33-37. https://doi.org/10.3969/j.issn.1000-3428.2016.09.006
    For the Vertical Bell Labs Layered Space-Time Orthogonal Frequency Division Multiplexing (V-BLAST OFDM) system with channel correlation and channel estimation error, this paper proposes an adaptive Transmit Power Allocation (TPA) algorithm to minimize the Bit Error Rate (BER). A decorrelation method based on beamforming is employed in the presence of deteriorating correlation. At the transmitter, the power allocation matrix is calculated using the Lagrange multiplier method under the total transmit power constraint. Simulation results show that the proposed TPA algorithm combined with the decorrelation method effectively improves BER performance for a practical communication system, and the improvement becomes more pronounced as the signal-to-noise ratio increases.
  • XU Guangxian,XU Shanqiang,XU Chunyan,JIN Yubo,WANG Jingfu
    Computer Engineering. 2016, 42(9): 38-42. https://doi.org/10.3969/j.issn.1000-3428.2016.09.007
    The channel of a wireless network is easily affected by the environment, and moving receiving nodes cause high packet loss rates. To solve this problem, this paper proposes a retransmission algorithm combining Hamming weight with broadcast network coding. It constructs a Hamming weight matrix, selects encoding packets by Hamming weight before lost-packet retransmission, and encodes the data packets for retransmission. Simulation results show that, compared with the wireless network broadcast retransmission algorithm based on network coding and the efficient wireless broadcast retransmission algorithm based on binary network coding, this algorithm reduces retransmission times and computational overhead and improves the efficiency of the wireless network.
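A minimal sketch of the decodability constraint behind Hamming-weight-guided coded retransmission. The paper's actual matrix construction is not reproduced; the `loss_matrix` representation and the greedy choice below are illustrative assumptions:

```python
def best_coded_set(loss_matrix):
    """Greedily pick lost-packet columns to XOR into one coded
    retransmission so that every receiver misses at most one packet of the
    set (and can therefore decode it), growing the Hamming weight of the
    combination as far as possible.
    loss_matrix[r][p] == 1 means receiver r lost packet p."""
    chosen = []
    for p in range(len(loss_matrix[0])):
        lost_by_someone = any(row[p] for row in loss_matrix)
        decodable = all(sum(row[q] for q in chosen) + row[p] <= 1
                        for row in loss_matrix)
        if lost_by_someone and decodable:
            chosen.append(p)
    return chosen  # Hamming weight of the combination is len(chosen)
```

When three receivers each lost a different packet, XORing all three serves everyone in a single retransmission.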
  • SUN Fashuai,JIN Jie,SU Hansong,LIU Gaohua
    Computer Engineering. 2016, 42(9): 43-47. https://doi.org/10.3969/j.issn.1000-3428.2016.09.008

    This paper proposes an adaptive downlink scheduling algorithm based on buffered data for the LTE-Advanced (LTE-A) relay downlink, aiming at the problem that existing scheduling algorithms simply perform priority ranking and cannot adapt to the requirements of the system. The algorithm adjusts the instantaneous rate fraction of the Modified Largest Weighted Delay First (M-LWDF) algorithm: to make the weight of the instantaneous rate parameter follow changes in the buffered data and let the algorithm adapt to system requirements, an exponent factor obtained by quantizing the buffered data is applied to the original instantaneous rate fraction. Simulation results show that the proposed algorithm improves service delay, spectrum efficiency and fairness, and also improves system throughput and reduces the packet loss rate.
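The buffer-driven adaptation of the M-LWDF priority could look like the following sketch, where the QoS weight is the standard -log(delta)/T term of M-LWDF and the buffer-quantization exponent `beta` is an assumed form, not the paper's exact quantization:

```python
import math

def mlwdf_priority(delay, rate_inst, rate_avg, delta, max_delay,
                   buffer_bytes, buffer_ref=1500.0):
    """M-LWDF priority with a hypothetical buffer-driven exponent on the
    instantaneous-rate fraction: more buffered data raises the influence
    of the instantaneous rate."""
    a = -math.log(delta) / max_delay            # QoS weight of M-LWDF
    beta = min(buffer_bytes / buffer_ref, 2.0)  # assumed quantization of buffer
    return a * delay * (rate_inst / rate_avg) ** beta
```

With a good channel (instantaneous rate above average), a fuller buffer yields a higher priority, which matches the adaptive intent described above.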

  • GAO Hongyuan,LIANG Yansong,LIU Dandan
    Computer Engineering. 2016, 42(9): 48-51,57. https://doi.org/10.3969/j.issn.1000-3428.2016.09.009
    In order to find optimal solutions for robust multiuser detection in impulse noise environments, a Novel Quantum Bee Colony (NQBC) algorithm is proposed based on the artificial bee colony algorithm and quantum computing. In NQBC, two new quantum foraging behaviors are used to realize cooperation and find the optimal nectar position in less time. A robust multiuser detector based on NQBC is designed for the presence of impulse noise, and simulation comparisons are made with multiuser detection methods based on the Genetic Algorithm (GA), Quantum Genetic Algorithm (QGA) and Particle Swarm Optimization (PSO). Simulation results show that the proposed method can find the optimal solution and has a lower bit error rate.
  • Lü Tianhang,LIU Qinrang,ZHAO Bo
    Computer Engineering. 2016, 42(9): 52-57. https://doi.org/10.3969/j.issn.1000-3428.2016.09.010
    To improve the 3D-Mesh inter-layer structure, which has difficulty coping with changing network traffic, an adaptive inter-layer structure for 3D-Mesh based on a greedy algorithm is proposed. It measures network parameters in real time, dynamically changes the working state of Through Silicon Vias (TSV), and adaptively switches to the router-TSV mapping most appropriate for the current network. Experimental results show that, compared with fully accessed and partially accessed 3D-Mesh, this inter-layer structure allocates network resources more reasonably, improves throughput and reduces latency.
  • NING Duobiao,ZHANG Bing
    Computer Engineering. 2016, 42(9): 58-62,70. https://doi.org/10.3969/j.issn.1000-3428.2016.09.011
    Data aggregation scheduling aims to find a feasible and efficient data aggregation scheme for Wireless Sensor Networks (WSN). Previous algorithms for this problem usually construct data aggregation routing based on shortest-path trees, which results in aggregation plans with unacceptable latency. This paper presents a novel scheduling algorithm, CGTA, based on Connected Dominating Set (CDS) theory. Through the CDS, it divides the nodes into backbone nodes and ordinary nodes, and it uses a greedy strategy to construct maximal link transmission subsets while making scheduling decisions. Experimental results show that, compared with SPTS and MWFS, the CGTA algorithm reduces network latency by 15% under various scenarios.
  • SONG Youmei,LI Jianbo,HE Tianyue,XU Jixing
    Computer Engineering. 2016, 42(9): 63-70. https://doi.org/10.3969/j.issn.1000-3428.2016.09.012
    In Delay Tolerant Networks (DTN), frequent network partitions caused by sparse node density and node mobility mean that a connected end-to-end path does not always exist during message delivery. Routing algorithms in DTN therefore generally adopt a store-carry-and-forward mechanism to deliver a message from source to destination. For these problems, a Similarity-based Probabilistic Routing (SBPR) algorithm is proposed, which combines node similarity with the delivery probability to the destination within a message's Time to Live (TTL). Both message replication and message forwarding strategies are adopted in SBPR. Concretely, when a node holding messages encounters other nodes, it replicates a message to a neighboring node with smaller inter-node similarity, so as to increase the message delivery ratio; if a neighboring node has larger node similarity and delivery probability to the destination, it forwards the message to that neighbor to save network resources. Experimental results show that SBPR outperforms the Epidemic, Prophet and First Contact (FC) routing algorithms in delivery ratio, network overhead ratio and message dropping ratio when node buffers are insufficient.
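The two per-contact rules in the SBPR description can be sketched as a simple decision function. The strict comparisons and the function name are assumptions; the abstract does not specify thresholds:

```python
def sbpr_action(sim_self, sim_peer, prob_self, prob_peer):
    """Decide what to do with a message on contact, following the two
    rules summarized above (replicate to dissimilar peers, hand over to
    strictly better peers)."""
    if sim_peer > sim_self and prob_peer > prob_self:
        return "forward"      # peer is strictly better: transfer custody
    if sim_peer < sim_self:
        return "replicate"    # dissimilar peer: copy to widen coverage
    return "keep"
```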
  • ZHANG Yuncan,WANG Lei,LIU Jing,WANG Song
    Computer Engineering. 2016, 42(9): 71-75,82. https://doi.org/10.3969/j.issn.1000-3428.2016.09.013
    Content-centric Networking (CCN) addresses content acquisition and distribution at the level of the network architecture. The finiteness of network resources and the unpredictability of network traffic inevitably cause congestion. However, mainstream CCN implementations still rely on IP routing, so current congestion control schemes for CCN can only be superimposed on those of IP and cannot show the real value of the research. Protocol-oblivious Forwarding (POF) is an extension of the OpenFlow protocol that supports forwarding packets of any format. Exploiting this feature, this paper realizes a CCN prototype based on POF, named Software Defined Content Network (SDCN), and proposes a joint congestion control strategy for SDCN. Experimental results show that this strategy effectively controls congestion in SDCN.
  • ZHANG Jingjing,ZHAO Chenggui,YUAN Jianming
    Computer Engineering. 2016, 42(9): 76-82. https://doi.org/10.3969/j.issn.1000-3428.2016.09.014
    Aiming at the high substrate-link stress caused by traditional Virtual Network Embedding (VNE) algorithms, a new VNE algorithm is proposed. In the node embedding stage, the importance degree of each node is obtained from its connectivity and bandwidth properties, and the most important virtual node is embedded first. The embedding scope of the other virtual nodes is determined with the location of the first embedded virtual node as the center. The appropriate path is then selected among the k-shortest paths by a resource pre-request method. Experimental results show that, compared with deterministic node embedding with k-shortest-path links and with splittable links, this algorithm performs better in embedding cost, cost/revenue ratio, average link stress and virtual network request acceptance ratio.
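The node-importance ranking used in the first embedding stage might be sketched as degree times total incident bandwidth, which is one plausible combination of the "connectivity and bandwidth properties" mentioned above, not the paper's exact metric:

```python
def node_importance(graph, bandwidth):
    """Score each node by degree times the total bandwidth of its incident
    links (an assumed combination). graph maps node -> neighbor list;
    bandwidth maps frozenset({u, v}) -> link bandwidth."""
    imp = {}
    for node, neighbors in graph.items():
        degree = len(neighbors)
        bw = sum(bandwidth[frozenset((node, n))] for n in neighbors)
        imp[node] = degree * bw
    return imp
```

The highest-scoring node would then be the first virtual node to embed.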
  • ZHAO Taifei,LIU Xue,LIU Yijie
    Computer Engineering. 2016, 42(9): 83-88,93. https://doi.org/10.3969/j.issn.1000-3428.2016.09.015
    Ultraviolet communication data determines the safety of helicopter emergency landing. To provide greater protection for the More Important Bits (MIB), a novel approach to Unequal Error Protection (UEP) over erasure channels, named the SUEP-LT code, is developed based on the block duplication method and improved Expanding Window Fountain (EWF) codes. The block duplication method, EWF codes and the SUEP-LT code are simulated and compared over a binary erasure channel. Experimental results show that the SUEP-LT code achieves better UEP performance: it yields a lower Bit Error Rate (BER) for the MIB while sacrificing the least reliability of the data as a whole, guaranteeing the priority and reliability of the important data and thus improving the safety of helicopter emergency landing.
  • SUN Minhong,SHAO Zhangyi,QIN Yuan,YAN Yunzhen
    Computer Engineering. 2016, 42(9): 89-93. https://doi.org/10.3969/j.issn.1000-3428.2016.09.016
    Since the Wolf Pack Algorithm (WPA) has demerits such as slow convergence and easily getting trapped in local optima, a Differential Evolution-Wolf Pack Algorithm (DE-WPA) is proposed and applied to spoofing jamming detection for the Global Navigation Satellite System (GNSS). A Hammerstein model is first used to model the nonlinear jammer or satellite transmitter together with the wireless channel. The DE-WPA is then utilized to estimate the parameters of the model, and the estimated parameters are used to identify spoofing jamming. Simulation results show that the DE-WPA is effective for Hammerstein model system identification, obtaining higher parameter identification precision and a higher deception jamming recognition rate than the Least Square (LS) estimation algorithm, the iterative algorithm and the WPA.
  • MA Wentao,HU Chuang,WANG Wenjie,GONG Yili
    Computer Engineering. 2016, 42(9): 94-99,104. https://doi.org/10.3969/j.issn.1000-3428.2016.09.017
    The Paxos algorithm, based on message passing, incurs a large amount of communication during execution. When applied in Wide Area Network (WAN) environments, it is vulnerable to bandwidth limits, which affects its efficiency. Aiming at this problem, this paper optimizes the communication model of Paxos and presents an improved algorithm named W-Paxos. It sets a proxy node in each data center for forwarding, processing and sending WAN messages, greatly reducing WAN traffic so as to avoid network congestion and increased delay. Since W-Paxos only optimizes the communication model without changing other parts of the protocol, it is applicable to almost all Paxos-based protocols. Experimental results show that, compared with the Mencius and EPaxos algorithms, W-Paxos generates fewer messages, and therefore reduces the workload of the consensus leaders, increases throughput and reduces communication delay.
  • YANG Jie,ZHOU Shengyuan,YU Fengqi
    Computer Engineering. 2016, 42(9): 100-104. https://doi.org/10.3969/j.issn.1000-3428.2016.09.018
    In underwater acoustic communication systems, most Doppler shift factor estimation methods cannot balance computational complexity and estimation accuracy at high relative motion speeds. In view of this, this paper proposes a Doppler shift factor estimation method based on Fast Fourier Transform (FFT) frequency measurement and the ambiguity function. First, the Doppler shift factor is confined to a small range by applying the FFT to a single-frequency pulse signal. Then the Doppler shift factor is estimated with the ambiguity function method within that range. Simulation results show that estimation accuracy remains better than 0.02% when the SNR is above -20 dB and the relative motion speed is between -60 m/s and 60 m/s, while computational complexity is significantly reduced compared with the ambiguity function method alone.
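The first (coarse) step, measuring the received frequency of a single-frequency pulse with an FFT and converting it into a Doppler shift factor, can be sketched as follows. The parameter values are illustrative and the ambiguity-function refinement is omitted:

```python
import numpy as np

def coarse_doppler_factor(signal, fs, f0):
    """Coarse Doppler shift factor from the FFT peak frequency of a
    single-frequency pulse: factor = f_received / f_transmitted - 1."""
    spectrum = np.abs(np.fft.rfft(signal))
    f_hat = np.fft.rfftfreq(len(signal), 1.0 / fs)[np.argmax(spectrum)]
    return f_hat / f0 - 1.0

# Illustrative example: a 10 kHz pulse received with a Doppler factor of 0.004
fs, f0, a = 96000, 10000.0, 0.004
t = np.arange(4096) / fs
rx = np.sin(2 * np.pi * f0 * (1 + a) * t)
```

The accuracy of this step is limited by the FFT bin width fs/N, which is why a finer search (here, the ambiguity function) follows within the coarse range.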
  • ZHU Majun,LEI Lei,CAI Shengsuo,DONG Tao
    Computer Engineering. 2016, 42(9): 105-109,115. https://doi.org/10.3969/j.issn.1000-3428.2016.09.019
    Using directional antennas is a valid way to improve bandwidth utilization in wireless Mesh networks. However, it brings new Medium Access Control (MAC) problems, such as deafness and new hidden terminals. To solve these problems, a novel time-division MAC protocol based on link scheduling for directional wireless Mesh networks is proposed. It uses an effective algorithm for concurrent transmission to maximize network capacity and provide per-link fairness. In the protocol, time is divided into continuous frames of constant length, each consisting of a scheduling subframe and a transmission subframe. Nodes perform channel sensing during the scheduling subframe and transmit packets in parallel during the transmission subframe. Simulation results show that, compared with the typical Basic DMAC scheme, the proposed MAC protocol achieves significant improvement in network throughput and provides better per-link fairness.
  • SHEN Haibo,CHEN Yongchang
    Computer Engineering. 2016, 42(9): 110-115. https://doi.org/10.3969/j.issn.1000-3428.2016.09.020
  • XIANG Hongyin,YUAN Jinsha,HOU Sizu
    Computer Engineering. 2016, 42(9): 116-120. https://doi.org/10.3969/j.issn.1000-3428.2016.09.021
    High fidelity in watermarked images can be obtained by prediction and sorting, but information cannot then be embedded into a wholly smooth block. For this problem, a novel reversible information hiding method based on locating benchmark pixels in smooth regions is proposed for digital images. It takes the first pixel of each smooth sub-block of n pixels as the benchmark pixel, and a block location map is used to localize the block and predict the gray values of the other pixels in it, so that the remaining n-1 pixels can carry information bits. Experimental results verify that the proposed method guarantees high fidelity and also effectively improves the capacity and the peak signal-to-noise ratio.
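The benchmark-pixel idea can be sketched with a simplified reversible scheme in which the first pixel predicts the rest and each prediction error is expanded to carry one bit. This difference-expansion stand-in, and its lack of overflow handling, are assumptions rather than the paper's exact method:

```python
def embed(block, bits):
    """The first pixel is the benchmark; each remaining pixel hides one
    bit in its expanded prediction error: p' = bench + 2*(p - bench) + b."""
    bench = block[0]
    return [bench] + [bench + 2 * (p - bench) + b
                      for p, b in zip(block[1:], bits)]

def extract(block):
    """Recover the hidden bits and the original block exactly."""
    bench = block[0]
    bits, orig = [], [bench]
    for q in block[1:]:
        e = q - bench
        b = e % 2                      # embedded bit is the error's parity
        bits.append(b)
        orig.append(bench + (e - b) // 2)
    return bits, orig
```

Round-tripping a block through embed and extract restores it bit-exactly, which is the defining property of reversible hiding.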
  • SHI Yanan,LI Jiangyin,KANG Baosheng
    Computer Engineering. 2016, 42(9): 121-125. https://doi.org/10.3969/j.issn.1000-3428.2016.09.022

    To improve image tamper detection and recovery, this paper proposes a layered watermark algorithm combining the spatial and frequency domains. In layer 1, one part of the watermark, which authenticates individual pixels in each 2×2 block, is embedded into the block itself, and the other part, together with the recovery watermark, is embedded into the mapping block. In layer 2, the image processed in layer 1 is divided into 8×8 blocks, and frequency information is extracted and embedded into the mapping block. Three layers are used for tamper detection and recovery of the tampered image. This paper also proposes an offset value selection scheme based on a chaos sequence and Torus isomorphic mapping to improve the safety of the secret keys. Experimental results demonstrate that the proposed algorithm not only resists dictionary search attacks, collage attacks, blind attacks and large-area cropping attacks, but also locates tampered blocks precisely with high quality of the recovered image.

  • GAO Zhenbin,BAI Xue,YANG Song,HE Jiaji
    Computer Engineering. 2016, 42(9): 126-131. https://doi.org/10.3969/j.issn.1000-3428.2016.09.023
    Hardware Trojans pose a huge threat to the reliability of integrated circuit chips, so this paper proposes a Trojan detection method based on the Hidden Markov Model (HMM). It extracts characteristic parameters from data of the original circuit and trains on them to obtain a normal model; it then extracts characteristic parameters from the data under test and calculates their matching degree with the model for analysis and identification. Experimental results show that this method is effective in identifying Trojans, detecting hardware Trojans whose area ratio is as small as 0.53%.
  • WANG Hui,WANG Tengfei,LIU Shufen
    Computer Engineering. 2016, 42(9): 132-137,143. https://doi.org/10.3969/j.issn.1000-3428.2016.09.024
    The traditional attack path prediction method based on the Bayesian network attack graph easily produces redundant paths, and its node confidence calculation is not precise enough. To solve these problems, this paper presents a new nine-tuple attack graph model and defines the resource vulnerability index and attack behavior risk. Combined with Attack Threat Index (ATI) analysis, an attack path generation method based on threat index analysis is proposed. The concept of operating cost is introduced into the likelihood-weighted sampling method to make node confidence calculation more precise and to avoid generating redundant paths. Analysis results show that the proposed method effectively reduces redundant paths and improves the accuracy of node confidence calculation.
  • XIAO Zhenjiu,LI Nan,WANG Yongbin,JIANG Zhengtao,CHEN Hong
    Computer Engineering. 2016, 42(9): 138-143. https://doi.org/10.3969/j.issn.1000-3428.2016.09.025
    In digital watermarking based on Singular Value Decomposition (SVD), to improve robustness and solve the problem of watermark false alarms, an improved SVD strong robust watermark algorithm is proposed. The original image is decomposed using the Contourlet Transform (CT), and block SVD is performed on the low-frequency coefficients. The product of the left singular matrix and the singular value matrix is selected as the principal component of the watermark, which is embedded by modifying the largest singular value of each sub-block. Experimental results show that the algorithm not only solves watermark false alarms, but also enhances robustness to a certain extent while ensuring the transparency of the watermark.
  • FENG Siyu,LEI Yinjie,ZHOU Xinzhi
    Computer Engineering. 2016, 42(9): 144-150. https://doi.org/10.3969/j.issn.1000-3428.2016.09.026
    3D face recognition with only partial data and a single training sample available is a highly challenging task. To address this challenge, this paper defines a statistical feature of multiple triangles based on local key points, which is robust to partial facial data, large facial expressions and pose variations. Aiming at the single-sample problem, it proposes a two-phase weighted Collaborative Representation (CR) classification method. A class-based probability estimate is calculated from the extracted local descriptors as prior knowledge, and this estimate is used as a local constraint in the second classification stage to enhance discriminating ability. Experimental results show that the proposed method improves the recognition rate in the case of partial facial data and a single training sample.
  • LIN Yi,WANG Zhibo
    Computer Engineering. 2016, 42(9): 151-157. https://doi.org/10.3969/j.issn.1000-3428.2016.09.027
    For time series whose slope fluctuates fiercely, piecewise algorithms that extract edge points based on slope easily fall into local optima and cannot preserve the overall features of the original series. For this problem, this paper proposes a Piecewise Linear Representation (PLR) method for time series based on first-order filtering, named PLR_SFWF. It brings filtering from signal processing into one-dimensional time series, revealing the elementary track of the series by smoothing slight fluctuations, so as to capture the points that preserve the overall features of the series. Based on a priority queue, it classifies points of different importance into different queues to obtain the final PLR. Experimental results show that, for time series with gentle slope fluctuation, the fitting error of SFWF is smaller than that of other PLR algorithms, and for time series with fierce slope fluctuation, SFWF preserves global characteristics better than SEEP.
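The two building blocks of the described method, first-order smoothing and then picking direction-change points as candidate segment boundaries, can be sketched as follows (the filter coefficient is illustrative and the priority-queue stage is simplified away):

```python
def first_order_filter(series, alpha=0.3):
    """First-order (exponential) filter: y[t] = alpha*x[t] + (1-alpha)*y[t-1].
    Smooths slight fluctuations while keeping the elementary track."""
    out = [series[0]]
    for x in series[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

def turning_points(series):
    """Indices where the series changes direction; with the endpoints these
    form candidate boundaries for the piecewise linear representation."""
    pts = [0]
    for i in range(1, len(series) - 1):
        if (series[i] - series[i - 1]) * (series[i + 1] - series[i]) < 0:
            pts.append(i)
    pts.append(len(series) - 1)
    return pts
```

In the full method the turning points of the filtered series, not the raw one, would be ranked and queued by importance.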
  • WANG Jikui,LI Shaobo
    Computer Engineering. 2016, 42(9): 158-162. https://doi.org/10.3969/j.issn.1000-3428.2016.09.028
    Based on the assumption of a single truth value, most current truth discovery algorithms cannot handle the multiple-truth case. To solve this problem, aiming at multiple-truth discovery in conflicting Deep Web data, this paper defines the authority of a view and the credibility of a description, inspired by the Hypertext-Induced Topic Search (HITS) algorithm; the two quantities depend on each other. On this basis, it constructs a link graph of views and proposes an iterative multiple-truth discovery algorithm named MTF. When the algorithm converges, the view with maximum authority is taken as the truth. Experimental results on Book-Author datasets show that the accuracy of MTF is greatly improved over the standard VOTE algorithm.
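The HITS-style mutual reinforcement between source credibility and view authority might be iterated as in this sketch. The update rules and normalization are assumptions, since the abstract only states that the two quantities depend on each other:

```python
def mtf_iterate(claims, n_iter=50):
    """Iterate credibility/authority updates in the spirit of HITS.
    `claims` maps each view (candidate value) to the set of sources
    asserting it; returns the authority score of each view."""
    sources = {s for views in claims.values() for s in views}
    cred = {s: 1.0 for s in sources}
    for _ in range(n_iter):
        # a view's authority sums the credibility of its supporting sources
        auth = {v: sum(cred[s] for s in srcs) for v, srcs in claims.items()}
        norm = max(auth.values())
        auth = {v: a / norm for v, a in auth.items()}
        # a source's credibility sums the authority of the views it asserts
        cred = {s: sum(auth[v] for v, srcs in claims.items() if s in srcs)
                for s in sources}
        cnorm = max(cred.values())
        cred = {s: c / cnorm for s, c in cred.items()}
    return auth
```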
  • DAI Ruirui,MA Yongjie,BAI Yulong,LI Zhi
    Computer Engineering. 2016, 42(9): 163-167. https://doi.org/10.3969/j.issn.1000-3428.2016.09.029
    To avoid the premature convergence and slow convergence of traditional Differential Evolution (DE) algorithms on complex function optimization, an improved DE algorithm based on pattern search is presented. A mechanism to judge premature convergence is introduced: when premature convergence is detected, the algorithm takes the optimal solution of the current population as the initial point for a pattern search, so as to escape the local optimum and strengthen the global optimization ability. Simulations on several typical test functions show that, compared with the basic DE algorithm and a DE algorithm based on chaos search, the proposed algorithm has a stronger ability to jump out of local optima, higher convergence precision and stronger optimization performance.
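The local refinement move, running a pattern search from the current best individual once premature convergence is detected, can be sketched with a simple coordinate pattern search (the premature-convergence test itself and the DE population loop are omitted):

```python
def pattern_search(f, x0, step=0.5, shrink=0.5, tol=1e-6):
    """Minimal coordinate pattern search: probe +/- step along each
    coordinate, keep any improving probe, and shrink the step when a full
    pass yields no improvement, until the step falls below tol."""
    x, fx = list(x0), f(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = x[:]
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink
    return x, fx
```

In the hybrid algorithm, the refined point would replace the population's best individual before DE resumes.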
  • HUANG Xuehua,KONG Fang,ZHOU Guodong
    Computer Engineering. 2016, 42(9): 168-173. https://doi.org/10.3969/j.issn.1000-3428.2016.09.030
    Using traditional machine learning methods, a standard platform for Chinese coreference resolution is built. After introducing a quadratic expression recognition classifier, the performance of the standard platform is not improved. Aiming at this problem, an improved expression recognition method is proposed, which classifies only personal pronouns and proper nouns while keeping all common noun phrases. Experimental results show that, compared with the traditional rule-based expression recognition method, the improved method effectively improves the performance of automatic anaphora resolution in Chinese.
  • ZHANG Juan,JIANG Yun,HU Xuewei,SHEN Jian
    Computer Engineering. 2016, 42(9): 174-179. https://doi.org/10.3969/j.issn.1000-3428.2016.09.031
    The Restricted Boltzmann Machine (RBM) is one of the important models in deep learning, and the Convolutional RBM (CRBM) built on it is widely used in image processing and speech recognition. However, the long training time of the CRBM remains a problem that cannot be ignored. In this paper, the Fast Persistent Contrastive Divergence (FPCD) algorithm is used to train the CRBM to improve the learning speed and classification accuracy of the model. Experimental results show that, compared with PCD, CD_1 and other algorithms, FPCD improves the classification performance of the CRBM.
  • QIN Huazheng,HU Zhongshun,YANG Deqing,XIAO Yanghua
    Computer Engineering. 2016, 42(9): 180-185,191. https://doi.org/10.3969/j.issn.1000-3428.2016.09.032

    An entity categorizing and correlation-degree ranking algorithm based on related-entity category templates is proposed to automatically classify fragmented encyclopedia entities, since current encyclopedia knowledge is scattered and large-scale related-entity structures are hard to build by human labor. The proposed algorithm mines the category template of the entities related to a query entity using the entities referenced in the pages of similar-category entities, then maps the related entities into the template according to their categories, and ranks the entities in the template by their correlation degree. Experimental results show that the proposed algorithm achieves better entity categorization than clustering methods and lower ranking complexity than the method that sorts by entity correlation degree first. Furthermore, the algorithm significantly reduces the human labor cost of building related entities.

  • XIA Qing,YAN Xin,YU Zhengtao,WANG Jiancheng,GAO Shengxiang,HONG Xudong
    Computer Engineering. 2016, 42(9): 186-191. https://doi.org/10.3969/j.issn.1000-3428.2016.09.033
    Analyzing and discovering bilingual topics is a research hotspot, but little work addresses specific contexts. This paper therefore proposes a similarity calculation method for the Sino-Vietnamese context based on bilingual topic word distributions in Sino-Vietnamese bilingual news texts. The method mixes news element features such as titles, keywords and entities, integrates this feature information into the context similarity calculation to construct a bilingual text similarity matrix, and uses an adaptive K-means algorithm to cluster the Sino-Vietnamese bilingual news texts in order to analyze Sino-Vietnamese bilingual news topics. Experimental results show that the precision, recall and F-measure of the proposed method are higher than those of the calculation method using only news text similarity with K-means clustering.
  • CHEN Cheng,PAN Zhenghua,Lü Yongxi
    Computer Engineering. 2016, 42(9): 192-196,201. https://doi.org/10.3969/j.issn.1000-3428.2016.09.034
    The negative relationship between knowledge and information is becoming more and more important in the field of information processing. Based on the fuzzy propositional logic system FLCOM with three kinds of negation, this paper presents a new fuzzy comprehensive evaluation method. It proposes the concepts of the λ-medium negation proposition and the λ-interval function, and gives a fuzzy comprehensive evaluation method based on FLCOM according to the maximum membership degree principle of fuzzy sets. The method is applied to the case of the Songpan earthquake disaster, and the results are compared with the grade evaluation results of the unascertained measure model and grey system theory. The results show that the fuzzy comprehensive evaluation method based on FLCOM is reasonable and effective, and that it fully considers the inner negation relationships between different evaluation levels.
  • CAI Biao,TUO Xianguo,SANG Qiang,YANG Kaixue,LIU Lizhao
    Computer Engineering. 2016, 42(9): 197-201. https://doi.org/10.3969/j.issn.1000-3428.2016.09.035
    Aiming at the problem that the complex network community detection process is complicated and has high time complexity, a community detection algorithm based on triangle clique attractors is designed according to the numerical relationship of triangle cliques between nodes. The algorithm can start from an arbitrary node. Each node is divided into the same community as the neighbor with which it forms the most triangle cliques, until all nodes in the network are visited and the network is divided into several communities. By setting a threshold on the number of communities, the communities are optimized until their number reaches the threshold. Experimental results show that the time complexity of the proposed algorithm is low, and that it identifies the community structure of real networks and benchmark networks well.
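    A simplified reading of the attractor idea — attach each node to the neighbour with which it shares the most triangles, then merge with union-find — can be sketched as follows (the community-count optimization step is omitted, and all names are illustrative):

```python
def triangle_communities(edges):
    """Assign each node to the neighbour sharing the most triangles with it
    (its 'triangle clique attractor'), then merge groups with union-find."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    parent = {n: n for n in adj}
    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving
            n = parent[n]
        return n

    for n, nbrs in adj.items():
        # triangles on edge (n, v) = common neighbours of n and v
        best = max(nbrs, key=lambda v: len(adj[n] & adj[v]))
        if adj[n] & adj[best]:             # only merge along triangle edges
            parent[find(n)] = find(best)

    groups = {}
    for n in adj:
        groups.setdefault(find(n), set()).add(n)
    return list(groups.values())
```

    On two triangles joined by a bridge edge, the bridge carries no triangle and is never merged across, so the two triangles come out as separate communities.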
  • ZHANG Qing,Lü Zhao
    Computer Engineering. 2016, 42(9): 202-207,213. https://doi.org/10.3969/j.issn.1000-3428.2016.09.036
    Domain question classification plays a central role in Question and Answering(Q&A) systems. Much current research on question classification focuses on open domains, while little attention is paid to special domains. Domain questions are usually short and suffer from data sparseness. Hence, this paper proposes a method for domain question classification based on topic expansion. The method consists of two components: feature selection and feature expansion. It first extracts feature words, which are the basis of feature expansion, from raw question text through the CHI feature selection method. Then it uses the Latent Dirichlet Allocation(LDA) topic model to analyze a universal dataset and obtain the topic distribution. To avoid noisy topics, topic entropy is adopted to obtain high-quality topics. Finally, the question text is expanded with words from the high-quality topics and classified with a Support Vector Machine(SVM). Experimental results show that the proposed method performs better than the traditional TFIDF text classification method and helps to improve the performance of Q&A systems.
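    The CHI feature selection step can be illustrated with the standard chi-square statistic for a term/class pair (a generic sketch from the textbook 2x2 contingency table, not the paper's code):

```python
def chi_square(a, b, c, d):
    """CHI statistic for a term/class pair from a 2x2 contingency table:
    a = docs in the class containing the term, b = docs outside the class
    containing it, c = docs in the class without it, d = the rest."""
    n = a + b + c + d
    denom = (a + c) * (b + d) * (a + b) * (c + d)
    return n * (a * d - c * b) ** 2 / denom if denom else 0.0
```

    Terms are ranked by this score per class and the top-scoring ones are kept as feature words; a term distributed evenly across classes scores zero.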
  • HUANG Miao,WANG Liutao,ZHANG Haichao
    Computer Engineering. 2016, 42(9): 208-213. https://doi.org/10.3969/j.issn.1000-3428.2016.09.037
    For the limitations of sub-space models in face recognition with image sets, a face recognition method with image sets based on Gabor wavelet transform and K-L Gaussian Riemannian Manifold Discriminant(GT-GRMD) analysis is proposed. Firstly, the Gabor wavelet transform is used to extract the feature vectors of faces in the image sets. Then, Gaussian components with prior probabilities in a Gaussian Mixture Model(GMM) are used to represent each image set, and a credible K-L kernel function is used to measure the distances between Gaussian components. Finally, the proposed weighted kernel discriminant analysis is applied to maximize the distance between the Gaussian distributions and access the underlying data distribution. Experimental results show that, compared with the method based on linear affine sub-spaces, the one based on nonlinear manifolds and the one based on statistical models, the proposed method has higher recognition rates on YTC and COX, and its ROC curve area on YTF reaches 85.91, the best performance. Comparison of testing and training times also shows that this method is more suitable for off-line face recognition systems with image sets.
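    The role of a K-L kernel between Gaussian components can be illustrated in the one-dimensional case (a sketch only; the paper works with multivariate GMM components, and the symmetrisation shown here is one common choice):

```python
import math

def kl_gauss(m0, s0, m1, s1):
    """KL divergence between 1-D Gaussians N(m0, s0^2) and N(m1, s1^2)."""
    return math.log(s1 / s0) + (s0**2 + (m0 - m1)**2) / (2 * s1**2) - 0.5

def kl_kernel(m0, s0, m1, s1):
    """Symmetrised KL divergence mapped to a kernel value in (0, 1]."""
    d = kl_gauss(m0, s0, m1, s1) + kl_gauss(m1, s1, m0, s0)
    return math.exp(-d)
```

    Identical components give kernel value 1, and the value decays as the components move apart, which is what lets a kernel discriminant analysis operate on distances between Gaussian distributions.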
  • TIAN Xuting,GUO Dan
    Computer Engineering. 2016, 42(9): 214-219. https://doi.org/10.3969/j.issn.1000-3428.2016.09.038
    In order to enhance frame rate conversion, a video motion-compensated interpolation method based on Codebook is proposed. The Motion Vector(MV) is obtained by the traditional block matching method, and the Codebook model is used to separate background and foreground regions. A variable block processing method is then applied to the foreground region, and a block merging algorithm is used to guarantee the integrity of the edge structure of objects in the foreground. Vector Median Filter(VMF) and vector smoothing are applied to the foreground region, which effectively reduces the shadow phenomenon and block effect. Experimental results show that applying Codebook to motion-compensated interpolation yields a more accurate and robust foreground area than Linear Frame Interpolation(LFI), VMF and the soft-decision motion estimation method, and meets the demand for high-quality visual effects.
  • HU Dan,ZHOU Xingshe,XU Wanjun,HOU Zhiqiang
    Computer Engineering. 2016, 42(9): 220-225. https://doi.org/10.3969/j.issn.1000-3428.2016.09.039
    Most traditional features may not be good enough for robust visual tracking in complex environments. This paper proposes a new visual tracking algorithm that uses a Convolutional Neural Network(CNN) to learn robust generic object feature representations, and fuses them with Local Binary Pattern(LBP) texture to offset the CNN's weakness in rotation invariance. To address the slow training of the CNN, offline pre-training on an auxiliary tiny-image dataset is used to improve the efficiency of online feature extraction. Experimental results on an open tracker benchmark show that, compared with DLT, this algorithm improves tracking precision by 14.08% on average and computation efficiency by 10.47% on average. It adapts to target changes and background influence, and has strong robustness and tracking efficiency.
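    The LBP texture used to complement the CNN features can be sketched for a single pixel with the standard 8-neighbour code (a generic illustration, not necessarily the exact variant used in the paper):

```python
import numpy as np

def lbp_code(patch):
    """8-neighbour Local Binary Pattern code of the centre pixel of a
    3x3 patch: each neighbour >= the centre contributes one bit."""
    c = patch[1, 1]
    # clockwise neighbour offsets starting at the top-left corner
    offs = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (y, x) in enumerate(offs):
        if patch[y, x] >= c:
            code |= 1 << bit
    return code
```

    A histogram of these codes over a region gives a texture descriptor that depends on local intensity ordering rather than absolute values.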
  • HUANG Dandan,SUN Yi
    Computer Engineering. 2016, 42(9): 226-234. https://doi.org/10.3969/j.issn.1000-3428.2016.09.040
    In order to enhance the robustness of the target model in visual tracking, a local discriminative tracking method based on sparse weighting is proposed within the Bayesian inference framework. The target is represented as a combination of multiple local discriminative sparse models, and a weight is assigned to each local model according to its significance in expressing the target. Modeling the target as a weighted combination of local models alleviates the influence of appearance changes and improves the robustness of the model. During tracking, the candidate most similar to the target model is chosen as the tracking result, occlusion detection is added to reduce the influence of occlusion, and the target model is updated automatically to avoid drifting. Experimental results show that the proposed method maintains robust tracking when the target appearance changes.
  • LI Yanwei,WANG Xuerui
    Computer Engineering. 2016, 42(9): 235-239. https://doi.org/10.3969/j.issn.1000-3428.2016.09.041
    For the characteristics of the multi-channel data of polarimetric Synthetic Aperture Radar(SAR) images, an improved decorrelation target detection algorithm for polarimetric SAR images is proposed. The classical Polarimetric Matched Filter(PMF) metric is analyzed with respect to solving the optimal weight and the subsequent derivation of the statistical distribution. Real polarimetric SAR data are used to verify that the correlations between the channels of the PMF metric cannot be ignored for different surface features. The channels of the PMF metric are decorrelated according to the decorrelation theory of the two-dimensional Gaussian distribution, thereby obtaining a new Decorrelated Polarimetric Matched Filter(DPMF) metric. The solution for the optimal weight of the DPMF metric suits more general cases than that of the PMF metric. After decorrelation there is no correlation between the channels of the DPMF metric, which satisfies the independence of complex Gaussian random variables and makes the subsequent derivation of the statistical distribution more rigorous. Experimental results on real polarimetric SAR data show that the detection algorithm based on the DPMF metric can efficiently distinguish targets from clutter with a high detection rate and a low false alarm rate.
  • ZHANG Jianming,LI Pei,LI Xudong,WU Honglin
    Computer Engineering. 2016, 42(9): 240-245. https://doi.org/10.3969/j.issn.1000-3428.2016.09.042
    As current algorithms cannot effectively remove mixed Gaussian and salt-and-pepper noise, this paper proposes a hybrid denoising algorithm combining Adaptive Median Filtering(AMF) with improved sparse representation. It uses AMF in the initialization step to detect noisy pixels and suppress salt-and-pepper noise, and then removes Gaussian noise by means of an improved K-Singular Value Decomposition(K-SVD) dictionary learning method and the Backtracking-based Adaptive Orthogonal Matching Pursuit(BAOMP) sparse coding method. Experimental results show that the proposed algorithm gains a higher Peak Signal to Noise Ratio(PSNR) and faster denoising speed than the Weighted Encoding with Sparse Nonlocal Regularization(WESNR) algorithm under heavy Gaussian noise.
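    The AMF stage can be sketched with the textbook adaptive median filter: the window grows until its median is not an impulse, and only impulse-valued pixels are replaced (a simplified illustration, not the paper's implementation):

```python
import numpy as np

def adaptive_median_filter(img, s_max=7):
    """Adaptive Median Filtering (AMF): grow the window at each pixel
    until the window median is not an impulse (or s_max is reached),
    then replace the pixel with the median only if the pixel itself
    looks like an impulse."""
    h, w = img.shape
    out = img.copy()
    for y in range(h):
        for x in range(w):
            s = 3
            while True:
                r = s // 2
                win = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
                zmin, zmed, zmax = win.min(), np.median(win), win.max()
                if zmin < zmed < zmax or s >= s_max:
                    if not (zmin < img[y, x] < zmax):  # pixel is an impulse
                        out[y, x] = zmed
                    break
                s += 2  # enlarge the window and retry
    return out
```

    Because only impulses are overwritten, image detail untouched by salt-and-pepper noise passes through to the K-SVD Gaussian-denoising stage unchanged.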
  • XU Wentao,CHEN Zhaojiong
    Computer Engineering. 2016, 42(9): 246-251. https://doi.org/10.3969/j.issn.1000-3428.2016.09.043
    Aiming at the problem that the color of the principal object may be transferred improperly by point matching-based color transfer methods, a color transfer algorithm with a controllable principal object is proposed. The algorithm separates the principal objects from the background by building a mask of principal objects via image segmentation, carries out shape analysis on the histograms of both source and target images, figures out the zeros of the histograms using forward difference computation, identifies the peak and valley structure of both histograms by taking second-order derivative information into account, and completes the progressive color transfer of the principal objects through gradual histogram peak matching. Experimental results show that the algorithm controls principal object color transfer well without much manual intervention.
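    The peak/valley analysis via forward differences can be sketched as follows: peaks sit where the first difference changes sign from positive to negative (a negative "second derivative"), valleys where it changes the other way. This is a minimal sketch of the idea, not the paper's full shape analysis:

```python
import numpy as np

def peaks_and_valleys(h):
    """Locate peaks and valleys of a histogram from sign changes of the
    forward difference; the direction of the sign change plays the role
    of the second-order derivative."""
    d = np.diff(h)          # forward difference (first derivative)
    s = np.sign(d)
    peaks, valleys = [], []
    for i in range(1, len(s)):
        if s[i - 1] > 0 and s[i] < 0:
            peaks.append(i)      # rising then falling: a peak
        elif s[i - 1] < 0 and s[i] > 0:
            valleys.append(i)    # falling then rising: a valley
    return peaks, valleys
```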
  • ZHENG Liang,TAO Qian
    Computer Engineering. 2016, 42(9): 252-256. https://doi.org/10.3969/j.issn.1000-3428.2016.09.044
    Although a fisheye lens has a large field of view, the images taken with a fisheye camera have serious distortion, which is not conducive to human observation or machine recognition. An improved algorithm based on the existing nine-point non-iterative optimization algorithm is proposed to realize fisheye self-calibration and automatic correction. The Maximally Stable Extremal Region(MSER) detector is combined with the Scale Invariant Feature Transform(SIFT) algorithm to automatically obtain pairs of feature matching points between fisheye images, and kernel density estimation is used as a non-iterative method to replace the random sample consensus algorithm; self-calibration and distortion correction are achieved by applying the optimal parameters to the distortion model. Without prior knowledge of the scene or the camera lens parameters, the algorithm automatically matches feature points of two images with overlapping regions to correct fisheye images. Calibration and correction results show that, unlike the original algorithm, which requires manual selection of matching points, the proposed algorithm acquires feature matching points automatically and calibrates correctly, making fully automatic fisheye image correction possible.
  • HOU Yueen,LI Weiguang
    Computer Engineering. 2016, 42(9): 257-261,267. https://doi.org/10.3969/j.issn.1000-3428.2016.09.045
    To improve the robustness and accuracy of target tracking algorithms in complicated conditions, this paper proposes, under the particle filtering framework, a continuous-time target tracking algorithm based on 2-norm minimization sparse representation. Firstly, candidate targets are linearly reconstructed from a template dictionary, with the coefficients constrained by the 2-norm, yielding a 2-norm constrained objective function. Secondly, the proposed tracker takes the temporal consistency of inter-frame target state residuals into account and embeds a residual consistency constraint term into the objective function; the solution of the objective function is obtained by taking the partial derivative. Thirdly, a method combining sparse representation and principal component analysis is used to update the dictionary and realize accurate target tracking. Experimental results show that the proposed tracking algorithm has better tracking robustness and anti-interference ability than existing tracking algorithms.
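    Why a 2-norm constraint admits a closed-form solution by taking the partial derivative can be shown with the plain ridge-regression core of such objectives (a sketch of the mathematical device only; the paper's objective adds a residual consistency term):

```python
import numpy as np

def ridge_coeffs(A, y, lam=0.1):
    """Closed-form minimiser of ||y - A c||^2 + lam ||c||^2.
    Setting the partial derivative w.r.t. c to zero gives
    c = (A^T A + lam I)^{-1} A^T y."""
    k = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ y)
```

    This is the key computational advantage over 1-norm sparse coding: no iterative solver is needed per candidate, so evaluating many particles per frame stays cheap.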
  • XU Junliang,CAO Jian,YANG Kaibin
    Computer Engineering. 2016, 42(9): 262-267. https://doi.org/10.3969/j.issn.1000-3428.2016.09.046
    In order to monitor the operating condition of insulators, an improved binarization method is proposed for segmenting insulator images. The local mean of each pixel is used as a reference threshold for a first segmentation, which yields the initial object points, background points and approximate edge contour. For each pixel, the algorithm then judges its quality from its local properties and performs a second segmentation to obtain a better result. Experimental results show that, compared with the OTSU algorithm and the Wellner algorithm, the proposed algorithm correctly segments insulator images with intensity inhomogeneity and is more efficient and robust.
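    The first-pass segmentation against a local mean can be sketched in a few lines (a naive, unoptimised illustration; the window radius and margin are assumed parameters, not values from the paper):

```python
import numpy as np

def local_mean_binarize(img, r=2, t=0.1):
    """First segmentation pass: compare each pixel with the mean of its
    (2r+1)x(2r+1) neighbourhood; pixels noticeably darker than the local
    mean become object (0), the rest background (255)."""
    h, w = img.shape
    out = np.empty((h, w), np.uint8)
    for y in range(h):
        for x in range(w):
            win = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            out[y, x] = 0 if img[y, x] < win.mean() * (1 - t) else 255
    return out
```

    Because the threshold follows the local mean, a slow illumination gradient across the insulator does not flip the classification the way a single global OTSU threshold would; a second pass can then refine pixels near the approximate edge contour.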
  • ZHU Yulian,ZHAO Chunyang,GONG Weijiang
    Computer Engineering. 2016, 42(9): 268-272,278. https://doi.org/10.3969/j.issn.1000-3428.2016.09.047
    An object tracking algorithm with a fixed tracking window may fail because the background information in the window increases for a shrinking target or decreases for a growing one. To solve this problem, a scale-adaptive target tracking algorithm based on fragment representation is proposed. Firstly, a structured output Support Vector Machine(SVM) is used for spatial localization, and the object is then represented by multiple image fragments. Finally, fragments at different scales are compared with the Earth Mover's Distance(EMD) metric to determine the final scale. Experimental results show that the proposed algorithm can not only cope with scale variation but also behaves well under appearance change, brightness change and partial occlusion of the target. The tracking error is decreased, giving better performance than existing algorithms including LOT, OAB and CXT.
  • LI Shuping,CHENG Jun,LI Hengyu
    Computer Engineering. 2016, 42(9): 273-278. https://doi.org/10.3969/j.issn.1000-3428.2016.09.048
    Aiming at the influence of environmental factors and the complex coordinate transformations of most head-eye systems, and on the basis of anatomical and physiological research on human eyes and the oculomotor neural circuits, an adaptive control system model of the Vestibulo-ocular Reflex(VOR) and smooth pursuit is established through research on the mechanisms of the crucial nerve nuclei and on visual information transmission and processing. Combined with human behavior characteristics, a control method for head-eye coordination is proposed and simulated with Matlab. Simulation results show that this motion control method can be applied to bionic robot vision systems and has good robustness.
  • LI Yin,WANG Lifu,SUN Yi
    Computer Engineering. 2016, 42(9): 279-285. https://doi.org/10.3969/j.issn.1000-3428.2016.09.049
    Traditional seismic signal compression methods do not process the seismic signal according to its characteristics, so their compression effect is poor. Aiming at this problem, a new method combining clustering with dictionary learning is proposed that exploits the self-similarity of seismic signals. The Fuzzy C-Means(FCM) clustering algorithm is used to cluster the samples, and a dictionary learning model is built. Through a transformation of the objective function, the model is converted into an ordinary dictionary learning model, which is finally solved with the K-Singular Value Decomposition(K-SVD) algorithm. Experimental results show that when the compression ratio ranges between 8.5 and 18.8, the Signal to Noise Ratio(SNR) of the method is 1 dB~4.5 dB higher than that of the discrete cosine transform method, 1 dB~4 dB higher than that of the Gabor method, and 0.5 dB~1 dB higher than that of the K-SVD algorithm.
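    The FCM clustering step can be sketched with the standard alternating update of memberships and centers (shown here for 1-D samples with made-up parameters, not the paper's configuration):

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50, seed=0):
    """Fuzzy C-Means on a 1-D sample array X: alternately update the
    fuzzy membership matrix U (c x n) and the cluster centers."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((c, n))
    U /= U.sum(axis=0)                        # memberships sum to 1 per sample
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1)   # fuzzily weighted means
        d = np.abs(centers[:, None] - X[None, :]) + 1e-9  # center-sample distances
        inv = d ** (-2.0 / (m - 1))
        U = inv / inv.sum(axis=0)             # standard membership update
    return centers, U
```

    Grouping similar signal segments this way before dictionary learning is what lets each cluster be represented sparsely by a dictionary adapted to its own waveform shape.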
  • YANG Fan,HE Min,SHI Jihong,WU Hao,XU Tao,LI Le
    Computer Engineering. 2016, 42(9): 286-291. https://doi.org/10.3969/j.issn.1000-3428.2016.09.050
    Aiming at the shortcomings of the pushing service of the grassroots party building integrated service platform in Yunnan province, such as flat, undifferentiated pushing and inefficiency, this paper proposes a party construction information pushing strategy based on the Latent Dirichlet Allocation(LDA) topic model. It performs unsupervised clustering of user history data with the LDA model, deduces party members' preferences from the feedback matrix, and then filters and pushes messages to selected people according to the relevance between a message and the members' preferences. Real mobile news texts from the Yunling Pioneer website are used as test data, and experimental results show that this strategy is more stable and meets the needs of real applications better than the traditional collaborative filtering algorithm and undifferentiated pushing.
  • LIU Di,GUAN Xin,LI Qiang,TENG Jianfu
    Computer Engineering. 2016, 42(9): 292-296,304. https://doi.org/10.3969/j.issn.1000-3428.2016.09.051
    In order to improve the quality of music recorded in everyday scenes, this paper presents an improved music signal de-noising method. The noisy music is preprocessed based on the low-rank characteristic of solo music, Robust Principal Component Analysis(RPCA) is applied to music de-noising, and the augmented Lagrange multiplier method is selected to solve the RPCA optimization problem. The method is compared with wavelet de-noising and Independent Principal Component Analysis(IPCA) de-noising, with Perceptual Evaluation of Audio Quality(PEAQ) and Signal to Noise Ratio(SNR) introduced as the evaluation indices. Experimental results show that the SNR of the music signal is improved by about 2 dB~5 dB and its PEAQ is also improved; the proposed method has a good de-noising effect for solo music.
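    The RPCA decomposition solved by augmented Lagrange multipliers can be sketched with the standard inexact ALM iteration, which splits a matrix into low-rank and sparse parts (a generic sketch of the optimization scheme, not the paper's tuned implementation; here the low-rank part would model the solo music spectrogram and the sparse part the noise):

```python
import numpy as np

def soft(x, tau):
    """Elementwise soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def rpca_alm(D, iters=200):
    """Decompose D = L + S (L low rank, S sparse) with an inexact
    augmented Lagrange multiplier scheme."""
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))            # standard sparsity weight
    mu = 1.25 / np.linalg.norm(D, 2)          # initial penalty parameter
    Y = np.zeros_like(D)                      # Lagrange multipliers
    S = np.zeros_like(D)
    for _ in range(iters):
        # singular value thresholding gives the low-rank update
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * soft(sig, 1.0 / mu)) @ Vt
        # soft thresholding gives the sparse update
        S = soft(D - L + Y / mu, lam / mu)
        Y += mu * (D - L - S)                 # multiplier ascent step
        mu = min(mu * 1.5, 1e7)               # grow penalty, capped
    return L, S
```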
  • YANG Shaohua,WANG Ying,LIU Gang
    Computer Engineering. 2016, 42(9): 297-304. https://doi.org/10.3969/j.issn.1000-3428.2016.09.052
    To improve maintenance efficiency, an aircraft maintenance work scheduling model and an optimization algorithm are proposed. Using a formal and graphical presentation of the flexible job shop scheduling problem, coupling constraints are set to construct the maintenance work scheduling model, and the execution steps of a Genetic Algorithm(GA) are designed, in which a coupling operator is introduced to adjust the process sequence and prevent chromosomes from violating the coupling constraints. Experimental results show that the algorithm satisfies the needs of aircraft maintenance work scheduling and presents good optimization performance on the Brandimarte benchmark.
  • WANG Yangming,ZHAO Li
    Computer Engineering. 2016, 42(9): 305-309,314. https://doi.org/10.3969/j.issn.1000-3428.2016.09.053
    In logistics management and transport scheduling systems based on the BeiDou Navigation Satellite System(BDS), achieving cost-effective dynamic scheduling of transport vehicles is an important issue. This paper focuses on the optimization of dynamic vehicle scheduling with multiple vehicles and multiple constraints. After analyzing and describing the problem and establishing appropriate mathematical models, it proposes a staged algorithm of rigid-constraint classification followed by objective function optimization. Simulation results show that the algorithm fully meets the load and volume requirements of multiple vehicles as well as the time window requirement; it not only obtains good scheduling results but also features fast optimizing speed and good convergence consistency.
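    The two-stage structure — first screen out vehicles violating rigid constraints, then optimize the objective over the survivors — can be sketched as follows (all field names and the cost objective are illustrative assumptions, not the paper's model):

```python
def feasible(vehicle, order):
    """Stage 1: rigid-constraint screening (load, volume, time window)."""
    return (vehicle["load"] >= order["weight"]
            and vehicle["volume"] >= order["volume"]
            and vehicle["free_at"] <= order["deadline"])

def dispatch(vehicles, order):
    """Stage 2: among feasible vehicles, minimise a cost objective."""
    ok = [v for v in vehicles if feasible(v, order)]
    return min(ok, key=lambda v: v["cost_per_km"] * v["dist_km"]) if ok else None
```

    Separating the stages keeps the optimization step small: the hard constraints prune the search space before any objective values are computed.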
  • HONG Guodong,MIN Weidong
    Computer Engineering. 2016, 42(9): 310-314. https://doi.org/10.3969/j.issn.1000-3428.2016.09.054
    Current methods that define rules manually in network fault management do not consider the influence of redundant and inaccurate data on the effectiveness and performance of the rules. Aiming at this problem, an automatic rule generation method is proposed to improve the efficiency of rule matching. It uses neighborhood rough sets to reduce the fault attributes of network faults, limits the threshold of the reduction results and then generates rules automatically. To solve the problem of multiple rules matching the same monitoring data, a rule matching algorithm based on value weights is proposed, which finds the rule with the highest matching degree for the current monitoring data in the case of multiple matches. Experimental results demonstrate that, compared with manually defined rules, the proposed method increases the efficiency of rule matching by 2.5 times without reducing the fault diagnosis rate.
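    Picking the rule with the highest value-weighted matching degree can be sketched like this (the attribute names, weights and matching-degree formula are illustrative assumptions, not the paper's definitions):

```python
def match_degree(rule, data, weights):
    """Weighted matching degree: the sum of the weights of the attributes
    whose rule condition agrees with the monitoring data, normalised."""
    hit = sum(w for attr, w in weights.items()
              if rule.get(attr) == data.get(attr))
    return hit / sum(weights.values())

def best_rule(rules, data, weights):
    """Among several matching rules, return the one with the highest
    weighted matching degree for the current monitoring data."""
    return max(rules, key=lambda r: match_degree(r, data, weights))
```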
  • ZHEN Hui,WANG Ouyang,YAO Jian,HE Jiangshan,HUANG Hai,GENG Chenge
    Computer Engineering. 2016, 42(9): 315-321. https://doi.org/10.3969/j.issn.1000-3428.2016.09.055
    Existing smartphone data communication solutions such as Bluetooth, WiFi and USB have shortcomings such as high hardware cost and power consumption, and are thus not suitable for impromptu transfers of small amounts of data. To overcome these shortcomings, this paper presents a data communication scheme for smartphones that uses the audio interface and builds the physical layer, data link layer and application layer, separating the data communication system from the specific application logic and ensuring the independence of the communication system. Associated communication mechanisms and protocols are designed to ensure accuracy and stability. Aiming at the compatibility problems caused by audio interface differences between smartphones, a signal reconstruction algorithm is given to solve signal distortion in software. Experimental results show that the scheme is a simple and convenient way to achieve stable data communication between smartphones and detection terminals, with good compatibility and versatility.