
15 August 2016, Volume 42 Issue 8
    

  • DU Ran,HUANG Qiulan,KAN Wenxiao,WANG Cong,XU Qi,CHEN Gang
    Computer Engineering. 2016, 42(8): 1-8. https://doi.org/10.3969/j.issn.1000-3428.2016.08.001
    HazelNut is a blockbased hierarchical storage system,which has special metadata requirements,including huge memory cost,fast access speed and everlasting data growth.To meet those requirements,a scalable highperformance metadata storage ring,named SCRing,is presented.SCRing consists of double rings and a cache table.One of the rings is named as shell ring to store the metadata information of data block,and the other is named as chord ring to locate the metadata information of data block.The cache table is used to cache the memory address of block metadata.The mapping relationship between the two rings are got through message digest algorithms.According to the SCRing demands on message digest algorithms and the Linux kernel version,a series of tests is made on digest generation speed,uniformity and constancy to get algorithm category and usage mode.Besides,I/O performance test is done on the basis of the above message digest algorithm tests.The results prove that SCRing is featured with high performance and scalability,which meets the metadata access requirements of HazelNut system.
  • JIANG Fan,FAN Xiuping
    Computer Engineering. 2016, 42(8): 9-13. https://doi.org/10.3969/j.issn.1000-3428.2016.08.002
    Testing and Test Control Notation Version 3(TTCN-3) is an internationally standardized test specification language. It supports both message-based and procedure-based communication. To meet the test requirements of remote procedure calls in distributed systems and cloud application platforms, an efficient compiler design scheme for procedure-based communication is proposed. It uses a modular approach to realize automatic translation from the TTCN-3 language to the C++ language and simplifies the design of the executor in the later phase. Experimental results show that this scheme not only reduces the dependence of the execution process on the platform in later phases, but also improves the maintainability of the code, owing to the similarity between the compiled C++ code and the TTCN-3 code.
  • ZHAO Gaoyi,ZHENG Qilong
    Computer Engineering. 2016, 42(8): 14-18,23. https://doi.org/10.3969/j.issn.1000-3428.2016.08.003
    Currently, the programming model of the BWDSP104X compiler is based on word-unit addressing modes, lacking support for data that is not 32 bits wide and incompatible with byte-unit addressing modes. By modifying the related frontend data types, intermediate code based on the byte addressing mode is generated. A series of 32-bit machine instructions is output by the backend to fulfill byte address transformation and data manipulation, implementing the expansion of the byte addressing mode. Register pairs are used to simulate 64-bit data access and operation, improving the accuracy of floating-point data operations. Experimental results show that the compiler is compatible with the byte addressing mode as well as the 64-bit floating-point data type, and can well meet the requirements of high-speed real-time signal processing.
  • PENG Zhan,LIANG Gen,ZHOU Bing
    Computer Engineering. 2016, 42(8): 19-23. https://doi.org/10.3969/j.issn.1000-3428.2016.08.004
    To describe feature interaction problems in telecommunication systems formally, accurately and compactly, this paper applies the Z language to the study of these problems. It formally describes three main types of feature interaction: integrality violation, identical trigger conditions and bad loops. It uses the Z language to develop precise and unambiguous formal specifications for specific cases of the three types, describing the feature interaction between services. It analyzes and formally verifies the feature interactions based on the Z specification. Researchers can grasp the collisions and faults in the system through the formal specifications and verification results. The proposed method can prevent and solve feature interaction problems in telecommunication systems, and safeguard the stability of the base system and new function modules.
  • SHI Weihang,ZHOU Yan,LIN Nan
    Computer Engineering. 2016, 42(8): 24-27,33. https://doi.org/10.3969/j.issn.1000-3428.2016.08.005
    To address the data collision problem between multiple readers and multiple tags in Radio Frequency Identification(RFID), this paper proposes an adaptive hierarchical collaborative anti-collision algorithm based on Carrier to Noise Ratio(CNR) and error sensing. It analyzes the physical and electrical properties of RFID tags and readers, and designs a CNR and error sensing analysis model for various application scenarios. The variation of bit error rate and packet loss rate is combined with a cross-layer design of the physical layer and network layer to build a collaborative control architecture that resolves the data collision problem between multiple readers and multiple tags. Experimental results show that the algorithm outperforms the binary tree anti-collision algorithm on collision probability, tag recognition rate and the number of reader pagings.
  • YANG Xiutao,CHI Peng,DU Yukun,XU Linwei
    Computer Engineering. 2016, 42(8): 28-33. https://doi.org/10.3969/j.issn.1000-3428.2016.08.006
    Assertion-based verification is widely used in IC design and verification to improve verification efficiency and quality. However, manually writing assertions costs much time and manpower, which greatly limits the application of assertions. To solve this problem, this paper proposes an automatic assertion generation method based on demands. Signal combining and waveform analysis algorithms are given by regulating the waveform description, defining the behavior window and parsing the register transfer level code. An automatic assertion generation tool, which contains a wave library, an assertion library and a wave parser, is then designed. Experimental results show that this method can generate 10 to 20 assertions per 100 lines of register transfer level code, and the generated assertions have high accuracy.
  • YANG Shihan,WU Jinzhao,DING Guanghong,QIN Donghong
    Computer Engineering. 2016, 42(8): 34-38,45. https://doi.org/10.3969/j.issn.1000-3428.2016.08.007
    To address the low fidelity and non-normative description of formal models of Analog and Mixed Signal(AMS) circuits, a formal modeling method is proposed based on nodal analysis with Kirchhoff’s Current Law(KCL). An extension to the Computation Tree Logic(CTL) formula is developed to describe the discrete events and dynamic behavior of AMS circuits while preserving many physical characteristics of the circuit, which guarantees the accuracy and credibility of property verification. Taking the ring oscillator as an example, the paper illustrates the implementation process of the proposed method and verifies the oscillator circuit with Coho. Experimental results show that the proposed method is valid.
  • WANG Zhifan,YE Qingwei,ZHOU Yu,WANG Xiaodong
    Computer Engineering. 2016, 42(8): 39-45. https://doi.org/10.3969/j.issn.1000-3428.2016.08.008
    In wireless sensor networks, many current low-power protocols achieve low system power consumption by controlling sleep time. Because network systems are complex and diverse, accurately estimating the sleep time of a routing node is difficult. To solve this difficulty, a queuing theory model is introduced in this paper to estimate the best sleep time of a local routing node, thereby enhancing the performance of low-power routing nodes. Firstly, for a single routing node, queuing theory is employed for mathematical modeling to estimate the sleep time. Secondly, ZigBee technology is utilized to build a specific wireless sensor network system, and dedicated hardware is applied to control the sleep time of routing nodes. Finally, the system is applied to the Dasan project to detect temperature and humidity, using the gateway to monitor the detected parameters remotely. The results show that the proposed method reduces the energy consumption of the system significantly.
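As a rough illustration of the queuing-theory idea, the sketch below uses a plain M/M/1 view of a routing node (the rates and the sleep-budget formula are illustrative assumptions; the paper's actual model is not reproduced here):

```python
def estimate_sleep_time(arrival_rate, service_rate):
    # M/M/1 sketch: rho is the node's utilization; the node may
    # sleep only in proportion to the time its queue is empty.
    rho = arrival_rate / service_rate
    if rho >= 1.0:
        return 0.0  # queue never drains, so there is no safe sleep budget
    p_empty = 1.0 - rho            # steady-state empty-queue probability
    return p_empty / arrival_rate  # illustrative idle stretch per arrival


# lightly loaded node: 2 packets/s arriving, 10 packets/s service rate
sleep_budget = estimate_sleep_time(arrival_rate=2.0, service_rate=10.0)
```

The point of the model is exactly this trade-off: the heavier the traffic a routing node relays, the smaller the sleep interval it can afford without dropping packets.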
  • SONG Pubin,SUN He,WANG Zhaojun,CHENG Zijing,WANG Mengyuan
    Computer Engineering. 2016, 42(8): 46-51. https://doi.org/10.3969/j.issn.1000-3428.2016.08.009
    In the High-resolution Satellite Sensor Network(HRSSN), the distribution of nodes is sparse, links connect and disconnect frequently, and the amount of transmission data is huge, which leads to network congestion and a considerable packet loss rate. To solve these problems, a Snapshot Integration Routing(SIR) method is proposed. This method integrates and plans snapshots of the network topology, based on the periodic and predictable characteristics of HRSSN as well as the store-wait-forward routing mechanism of the Delay Tolerant Network(DTN). It takes minimizing the maximum link utilization as the objective to build a mathematical model and achieve network load balancing. Experimental results show that, compared with Contact Graph Routing(CGR) and the Probabilistic Routing Protocol Using History of Encounters and Transitivity(PRoPHET), the method effectively avoids network congestion, achieves load balancing, and reduces the packet loss rate in HRSSN.
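The min-max link utilization objective can be sketched as follows (the link names, capacities and demand are made up for illustration; the paper's full snapshot-integration model is richer than this):

```python
def peak_utilization(path, load, capacity, demand):
    # Utilization of the busiest link on `path` after adding `demand`.
    return max((load[link] + demand) / capacity[link] for link in path)


def pick_path(paths, load, capacity, demand):
    # SIR-style objective: route so that the maximum link utilization
    # is minimized, balancing load across the snapshot topology.
    return min(paths, key=lambda p: peak_utilization(p, load, capacity, demand))


load = {"a": 5.0, "b": 1.0, "c": 1.0}       # current traffic per link
capacity = {"a": 10.0, "b": 10.0, "c": 10.0}
best = pick_path([["a"], ["b", "c"]], load, capacity, demand=2.0)
```

Here the longer but lightly loaded path is preferred over the single hot link, which is the load-balancing behavior the objective is designed to produce.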
  • YANG Han,LIU Xuchun,NA Liantao
    Computer Engineering. 2016, 42(8): 52-58,63. https://doi.org/10.3969/j.issn.1000-3428.2016.08.010
    In current Wireless Sensor Network(WSN) clustering technology, updating the cluster region based on an active polling mechanism requires updating the cluster head at set intervals, which easily causes problems such as cluster head node failure due to severe energy consumption and network congestion. Aiming at these problems, a new algorithm for cluster region update in WSN is proposed based on a passive trigger mechanism. Firstly, a disabled cluster head is effectively detected by monitoring the cluster region energy threshold, and the cluster head node is updated only when the energy of the cluster head is lower than its normal working energy. Then the best data transmission link is obtained after the new cluster is formed, so as to achieve efficient upload of the sensing data and improve link performance. Simulation results show that, compared with the EBDGA and PEDAP algorithms, the proposed algorithm can effectively reduce congestion during data acquisition and update, improve bandwidth utilization and network stability, and extend the lifetime of the network.
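The passive trigger can be sketched as below (the energy values and the re-election rule are illustrative assumptions, not taken from the paper):

```python
def needs_update(head_energy, working_energy):
    # Passive trigger: no periodic polling; the cluster region is
    # updated only when the head drops below its working energy.
    return head_energy < working_energy


def elect_head(members):
    # Illustrative re-election rule: the member with the most
    # residual energy becomes the new cluster head.
    return max(members, key=lambda node: node["energy"])


cluster = [{"id": 1, "energy": 0.9}, {"id": 2, "energy": 0.4}]
new_head = elect_head(cluster) if needs_update(0.1, 0.3) else None
```

Because the check fires only on an energy event rather than on a timer, a healthy head is never rotated out, which is where the congestion and energy savings come from.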
  • LIU Yun,CHEN Changkai
    Computer Engineering. 2016, 42(8): 59-63. https://doi.org/10.3969/j.issn.1000-3428.2016.08.011
    In random networks, a node must keep contact with its neighbor nodes to ensure communication. An Optimal Energy-efficient Neighbor Discovery Algorithm(OEENDA) is proposed. It discovers neighbor nodes through detection mechanisms within a delay bound. When the probability of connection error is at its minimum, minimum energy consumption of the nodes is achieved. Simulation results show that, compared with the Quorum-based Power Saving(QPS) algorithm and the Adaptive and Asynchronous Rendezvous Protocol algorithm, the OEENDA algorithm improves the effectiveness with which a node discovers its neighbors and reduces node energy consumption.
  • CHEN Xujun,ZHU Yufang,HU Junhong,MA Deyu
    Computer Engineering. 2016, 42(8): 64-68. https://doi.org/10.3969/j.issn.1000-3428.2016.08.012
    Aiming at varied channel environments during testing and training, this paper proposes a new Kernel Recursive Least-squares(KRLS) algorithm with sliding-window Approximately Linear Dependence(ALD) sparsification. The kernel matrix size depends only on the width of the sliding window. The nearest L data in the dictionary are selected to test the approximately linear dependence criterion. System consumption and complexity are reduced, overcoming the shortcoming of ALD-KRLS, whose kernel matrix grows linearly with the size of the dictionary. When the eigenvalue spread of the autocorrelation matrix of the training data is more than 40, the proposed SWALD-KRLS outperforms SW-KRLS by 3 dB in Mean Squared Error(MSE) performance and obtains less misadjustment in a stationary environment. Simulation results show that the proposed algorithm converges faster and obtains better MSE performance than ALD-KRLS and KRLS when the channel changes during the training phase.
  • TIAN Xinji,YANG Dong,LI Ya
    Computer Engineering. 2016, 42(8): 69-72,79. https://doi.org/10.3969/j.issn.1000-3428.2016.08.013
    To improve the reliability of communication systems, an improved space-time processing scheme is proposed for the Multi-input Multi-output(MIMO) X channel with three antennas for each user. In this scheme, a codeword of dimension 3×6 is designed for each user. Unwanted codewords are removed through interference alignment pre-coding and linear processing of the received signals. The wanted codewords are made orthogonal in transmission by another level of pre-coding, so the interference between codewords is cancelled. Experimental results show that the proposed scheme can improve the transmission efficiency of the channel.
  • YUAN Guixia,ZHOU Xianchun
    Computer Engineering. 2016, 42(8): 73-79. https://doi.org/10.3969/j.issn.1000-3428.2016.08.014

    To improve the efficiency of data sensing, collection and transmission in mobile crowd sensing networks, this paper presents an opportunistic transmission mechanism based on QoS sensing and cooperative competition. In this mechanism, a mobile crowd sensing network model of multi-relay mobile node cooperative transmission is established based on spatial frequency correlation. The sensing area is divided according to the opportunity perception cycle and the data acquisition frequency. A QoS sensing analysis model covering measures such as throughput, transmission delay and packet loss rate is built on the mobile node dynamic transmission model. To weaken channel contention and optimize power allocation, this paper designs a cooperation and competition strategy. Analysis results show that the proposed mechanism achieves higher throughput as well as lower delay, packet loss rate and transmission load compared with the direct transmission scheme and the cooperative transmission scheme without a direct transmission path.

  • LI Yan,DAI Shifang,CHANG Xiangmao
    Computer Engineering. 2016, 42(8): 80-84. https://doi.org/10.3969/j.issn.1000-3428.2016.08.015
    The Fiber-Wireless(FiWi) access network, which integrates a wireless subnetwork and a fiber subnetwork via Optical Network Units(ONU), is now widely deployed as an infrastructure to provide Internet access. During network operation, how to save energy and thereby reduce maintenance cost is becoming a hot research topic in FiWi. For downstream data transmission over FiWi, this paper studies the problem of energy-efficient downstream data distribution while assuring the quality of data transmission. It aims to support downstream transmission using the minimum number of ONUs for energy saving, proves the NP-hardness of the problem, and proposes a heuristic algorithm based on linear programming relaxation. The performance lower bound of the algorithm is evaluated by theoretical analysis. Simulation results demonstrate that this algorithm achieves better transmission performance and higher energy efficiency.
  • LIU Haike,LI Jilin,YOU Qidi,ZHANG Huajian
    Computer Engineering. 2016, 42(8): 85-90. https://doi.org/10.3969/j.issn.1000-3428.2016.08.016
    In a traditional two-layer network with a large number of nodes, the Minimum Spanning Tree(MST) algorithm suffers from slow convergence and low resource utilization across the whole network. Aiming at these problems, a dynamic construction method for the MST based on the OpenFlow protocol is proposed under the Software-defined Network(SDN) architecture. The controller can dynamically adjust the loop-free forwarding topology of the underlying network according to the current network traffic distribution to achieve whole-network load balancing. Simulation results show that the whole-network traffic distribution is more balanced after the topology adjustment, and delay jitter and packet loss ratio are greatly improved.
  • HU Rubei,JIANG Guoping,SONG Bo
    Computer Engineering. 2016, 42(8): 91-95,100. https://doi.org/10.3969/j.issn.1000-3428.2016.08.017
    For the case where only the local information of nodes is known, this paper uses the characteristics of weighted networks to propose an Improved Acquaintance Immunization strategy based on Weight-priority(IAI-WP). Using the classic susceptible-infected model and considering the difference in viral spreading probability between nodes, simulation results on artificial and real networks show that, in weighted networks, the strategy yields a lower density of infected individuals and a better effect than classic acquaintance immunization, while requiring lower computational complexity and less node information than targeted immunization. Moreover, the more pronounced the preferential attachment in the BBV network is, the better the effect of IAI-WP is.
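The weight-priority variant of acquaintance immunization can be sketched as follows (the toy network, the weights and the stopping rule are illustrative assumptions):

```python
import random


def acquaintance_immunize_wp(adj, fraction, rng=None):
    # Weight-priority acquaintance immunization sketch: repeatedly
    # pick a random node (local knowledge only) and immunize the
    # neighbor reached by that node's heaviest edge.
    rng = rng or random.Random(0)
    immune, nodes = set(), list(adj)
    target = int(fraction * len(nodes))
    while len(immune) < target:
        u = rng.choice(nodes)
        if adj[u]:
            immune.add(max(adj[u], key=adj[u].get))  # heaviest neighbor
    return immune


# toy weighted network: nodes 0 and 1 share the heavy edge
adj = {
    0: {1: 5.0, 2: 1.0},
    1: {0: 5.0, 2: 1.0},
    2: {0: 1.0, 1: 1.0},
}
immunized = acquaintance_immunize_wp(adj, fraction=0.4)
```

Because heavy edges tend to attach to hubs in weighted scale-free networks, following the heaviest edge biases immunization toward important spreaders while still using only local information.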
  • LI Zuohui,CHEN Xingyuan
    Computer Engineering. 2016, 42(8): 96-100. https://doi.org/10.3969/j.issn.1000-3428.2016.08.018
    Traitor tracing and revocation are crucial to the use of Attribute-based Encryption(ABE). The ABE scheme with Generalized Wildcards(GWABE) is a convenient way to solve these problems. Since the decryption cost of the existing GWABE scheme increases linearly with the number of attributes used in decryption, an ABE scheme with Fast decryption and Generalized wildcards(FGWABE) is proposed with the assistance of the mathematical properties of bilinear groups. The scheme is proven secure under the decisional q-parallel Bilinear Diffie-Hellman Exponent(q-BDHE) assumption. Performance analysis shows that the ciphertexts of this scheme can be decrypted with a constant number of pairings. When one attribute is used in decryption, this number in FGWABE is the same as in the Attribute-based Traitor Tracing(ABTT) scheme and the GWABE scheme, and FGWABE becomes more efficient as the number of attributes increases.
  • YE Ting,CHEN Kefei,SHEN Zhonghua,MENG Qian,ZHANG Wenzheng
    Computer Engineering. 2016, 42(8): 101-106. https://doi.org/10.3969/j.issn.1000-3428.2016.08.019
    Welch-Gong(WG) sequences have good randomness properties, including a long period, balance, ideal 2-tuple distribution, two-level autocorrelation, three-level cross-correlation with m-sequences, and exponentially increasing linear complexity. For the WG transformation, the odd terms of the polynomial function are studied. Considering the complexity of the polynomial function in the WG transformation, this paper extends the specific five-term function to a general three-term function and shows that the new sequences still have good randomness and low linear complexity. It selects a specific instance to analyze the hardware implementation of the WG cipher based on the three-term function, providing a reference for the design of the algorithm.
  • BAO Chuansong,XU Yan,HUANG Conglin
    Computer Engineering. 2016, 42(8): 107-111,116. https://doi.org/10.3969/j.issn.1000-3428.2016.08.020
    Due to the openness of wireless networks, user identity and service information may leak in the interaction between mobile devices and servers. Therefore, this paper puts forward a payment mechanism to protect the privacy of mobile network users. In this mechanism, servers adopt Ciphertext Policy Attribute Based Encryption(CP-ABE) to send results to users, which is more efficient than traditional encryption methods. Furthermore, by using pseudonyms, multiple users can make a joint request for one or more services, which reduces the cost for users while protecting their privacy. Analysis results show that the mechanism can protect the privacy of user identities, realize secure communication, and reduce the cost of server-side computation and communication.
  • LI Xinbin,ZHANG Zi,HAN Song,YAN Lei
    Computer Engineering. 2016, 42(8): 112-116. https://doi.org/10.3969/j.issn.1000-3428.2016.08.021
    In the Cognitive Radio Network(CRN), a Malicious User(MU) can transmit false information to the fusion center, causing it to make the false decision that the Primary User(PU) is present. To solve this problem, a cooperative spectrum sensing algorithm based on an identification mechanism is proposed. In this algorithm, anti-counterfeiting codes are attached to each piece of local spectrum information to filter out incorrect information, reducing the effect of the MU on cooperative spectrum sensing. The anti-counterfeiting code length is also optimized to reduce the system overhead. Further, the sensing efficiency is introduced to evaluate the detection performance of this identification mechanism and achieve a balance between detection performance and system overhead. Simulation results demonstrate that, compared with the traditional K-out-of-N method, this method improves the sensing efficiency effectively in the presence of an MU.
  • LI Jindong,WANG Tao,WU Yang,LEI Dong
    Computer Engineering. 2016, 42(8): 117-122. https://doi.org/10.3969/j.issn.1000-3428.2016.08.022
    Aiming at the potential security vulnerabilities of the Session Initiation Protocol(SIP) and the lack of research on exploiting SIP vulnerabilities, a solution for exploiting SIP vulnerabilities is proposed based on protocol analysis and fuzz testing. On the basis of analyzing SIP grammar, message format and session interaction process, the method generates abnormal SIP data packets as fuzz test cases for SIP messages with potential vulnerabilities, randomly sends abnormal data to trigger the test object during session setup, records abnormal information when monitoring exceptions of SIP clients, and analyzes whether the exception information can be exploited. Experimental results show that this scheme can discover more SIP vulnerabilities based on protocol analysis and fuzz testing, improving mining quantity and efficiency.
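One kind of abnormal packet such a fuzzer might generate is sketched below: a minimal SIP INVITE with one header mutated to an oversized value. All addresses and header values are illustrative, and this is only one mutation strategy among the many a real fuzzer would use:

```python
import random


def mutate_sip_invite(base_headers, rng=None):
    # Fuzz-case sketch: corrupt one header of a minimal SIP INVITE
    # with an oversized value, the kind of abnormal packet sent to
    # the test object to provoke parser faults.
    rng = rng or random.Random(0)
    headers = dict(base_headers)
    field = rng.choice(list(headers))
    headers[field] = "A" * 4096  # oversized-field mutation
    lines = ["INVITE sip:bob@example.com SIP/2.0"]
    lines += [f"{k}: {v}" for k, v in headers.items()]
    return "\r\n".join(lines) + "\r\n\r\n"


base = {
    "Via": "SIP/2.0/UDP host.example.com",
    "To": "<sip:bob@example.com>",
    "From": "<sip:alice@example.com>",
    "CSeq": "1 INVITE",
}
packet = mutate_sip_invite(base)
```

A monitoring component would then send such packets during session setup and watch the SIP client for crashes or exceptions, as the abstract describes.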
  • WU Qi
    Computer Engineering. 2016, 42(8): 123-125,133. https://doi.org/10.3969/j.issn.1000-3428.2016.08.023
    While relieving the stress of local storage for users, cloud servers can eavesdrop on the stored data. Thus, users usually choose to upload data to cloud servers in encrypted form. However, no classic encryption and decryption algorithm provides search capability, which dramatically reduces usability. Therefore, an asymmetric searchable encryption scheme is designed. It aims at fixing five flaws: “trapdoor can be generated by anyone”, “ciphertext can be tampered with at will”, “key pairs are generated by users”, “encrypt identities”, and “S is of no use”. Analysis results show that the proposed scheme resolves the aforementioned five flaws in the previous scheme while maintaining its framework, and the security of semantic information between the communicating sides is guaranteed.
  • SONG Li,XIE Gang,YANG Yunyun
    Computer Engineering. 2016, 42(8): 126-133. https://doi.org/10.3969/j.issn.1000-3428.2016.08.024
    At present, most community partition algorithms divide a network into several independent communities. But in some real networks, communities are not independent of each other: some communities share common nodes, and most community partition algorithms cannot partition such networks rationally. In view of this situation, a membership function based on the number of shared neighbors between nodes is proposed. It uses the fuzzy equivalence relation to detect communities in combination with a fuzzy clustering algorithm, assigning the nodes within the same equivalence class to the same community by setting a reasonable threshold α. The algorithm is tested on the GN benchmark, the Karate Club network and the Dolphin network. Results show that the partition results for networks of known community structure conform to the actual situation. In community detection on the Dolphin network, compared with the Fast Newman(FN) algorithm, the greedy algorithm using a heap structure and the Label Propagation Algorithm(LPA), this algorithm achieves higher modularity and better detects shared nodes in real networks.
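A shared-neighbor membership function and its α-cut can be sketched as below (the exact normalization and the transitive-closure step of the fuzzy equivalence relation are not reproduced; this shows only the similarity and thresholding idea on a toy graph):

```python
def membership(u, v, adj):
    # Fuzzy similarity sketch: overlap of the closed neighborhoods
    # of u and v (Jaccard form), driven by shared neighbors.
    nu, nv = adj[u] | {u}, adj[v] | {v}
    return len(nu & nv) / len(nu | nv)


def same_community(u, v, adj, alpha):
    # Cut the fuzzy relation at threshold alpha: nodes above the
    # cut fall into the same (possibly overlapping) community.
    return membership(u, v, adj) >= alpha


adj = {
    "a": {"b", "c"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"c"},
}
close = same_community("a", "b", adj, alpha=0.9)
far = same_community("a", "d", adj, alpha=0.9)
```

Node "c" here scores a moderate membership with both sides, which is how a shared node can end up belonging to more than one community.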
  • WANG Tingting,DAI Weidi,JIAO Pengfei,LI Xiaoming
    Computer Engineering. 2016, 42(8): 134-138. https://doi.org/10.3969/j.issn.1000-3428.2016.08.025
    Many models for community detection are designed only for static networks; they ignore temporal information and often model real-world data poorly. To solve this problem, an evolving community detection model based on the degree-corrected block model is proposed. Following the framework of evolutionary clustering, the model introduces a regularization term based on the community membership matrix into the objective function of the degree-corrected stochastic block model. A network cross-validation approach is utilized for model selection, so the proposed method can deal with evolving networks with varying numbers of communities. In this way, it overcomes the problem of assuming the number of communities to be constant, which is not consistent with real-world data. Experimental results show that the model performs better, with higher accuracy and a lower error rate, than the classical dynamic stochastic block model and FacetNet.
  • CHEN Xia,MIN Huaqing,SONG Hengjie
    Computer Engineering. 2016, 42(8): 139-145,152. https://doi.org/10.3969/j.issn.1000-3428.2016.08.026
    Crowdsourcing can effectively solve a wide variety of tasks by employing the collective intelligence of a distributed human population on the network. However, cheating users on crowdsourcing platforms can submit unreliable answers to obtain rewards, degrading the quality of crowdsourcing services and restricting task resolution. Aiming at this problem, this paper proposes an automatic identification method for cheating users. It systematically analyzes cheating users’ behavioral characteristics and empirically summarizes the possible spamming types on the Baidu Crowdsourcing Platform(BCP). Based on these analysis results, a logistic regression model is constructed to obtain objective measures of user reliability, from which cheating users can be automatically identified. Experimental results show that, compared with the baseline methods of majority voting, gold question sets and the SpEM method, the proposed method has higher recognition accuracy, reaching 97%.
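The reliability-scoring step can be sketched with a plain logistic model (the features, weights and decision threshold below are purely illustrative; the paper learns its own weights from BCP behavior data):

```python
import math


def reliability(features, weights, bias):
    # Logistic-regression sketch: map behavioral features to a
    # reliability probability in (0, 1) via the sigmoid function.
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))


def is_cheater(features, weights, bias, threshold=0.5):
    # Users whose reliability falls below the threshold are flagged.
    return reliability(features, weights, bias) < threshold


# hypothetical features: agreement rate with other workers,
# normalized mean answer time
weights, bias = [4.0, 0.5], -2.0
honest_flag = is_cheater([0.9, 1.0], weights, bias)
spammer_flag = is_cheater([0.1, 1.0], weights, bias)
```

The advantage over majority voting is that the score combines several behavioral signals at once, so a spammer who occasionally agrees with the majority can still be caught.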
  • LI Min,ZHANG Qin,SONG Yuecong,LIU Heng
    Computer Engineering. 2016, 42(8): 146-152. https://doi.org/10.3969/j.issn.1000-3428.2016.08.027
    Face and gait are typical nonlinear models, but the existing methods for dimension reduction of face and gait data are mainly linear, which causes information loss and reduces the recognition rate. To solve this problem, a cognitive physics method for human identification is presented. The facial features and gait features are described by a data field. The interaction and movement among data are used to realize self-organized clustering, a nonlinear way of reducing the dimensions of identity feature data. After dimension reduction, the sample database is sorted by maximum potential value so that rapid detection of discrete points and binary search over test samples are realized. The facial features and gait features are fused in multiple layers based on the improved D-S evidence theory. Experimental results show that the presented method improves the recognition rate by 7% and reduces the recognition time by 40% compared with linear methods.
  • LI Zida,LIAO Shizhong
    Computer Engineering. 2016, 42(8): 153-159,165. https://doi.org/10.3969/j.issn.1000-3428.2016.08.028
    Maximum likelihood estimation is a classical and effective method for Bayesian network parameter learning on large samples, but it is not consistent when learning on small samples with little expertise. To address this issue, a novel method called TL-WMLE is proposed for Bayesian network parameter learning, combining maximum likelihood, transfer learning and imbalanced-sample methods. The method uses an auxiliary classifier constructed by the SMOTE-N method and covariate shift theory, and computes the weights of source samples according to the predicted probability of the source domain given by the auxiliary classifier. It then mixes the re-weighted source training samples and the target training samples to build a likelihood function on the target domain, and uses this new likelihood function to learn the parameters of the target domain via maximum likelihood estimation. Experimental results demonstrate that the classification accuracy of the proposed method outperforms that of the plain likelihood method on small samples.
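The core weighted-likelihood step can be illustrated for a single Bernoulli parameter (the sample values and weights are made up; the auxiliary classifier that produces the source-sample weights is not shown):

```python
def weighted_mle_bernoulli(samples, weights):
    # Weighted MLE sketch: re-weighted source samples and target
    # samples contribute to the estimate in proportion to their
    # weights, as in the mixed likelihood on the target domain.
    total = sum(weights)
    return sum(w * x for w, x in zip(samples, weights)) / total


# three target samples (weight 1.0) mixed with three source
# samples down-weighted by a hypothetical auxiliary classifier
samples = [1, 1, 0, 1, 0, 0]
weights = [1.0, 1.0, 1.0, 0.5, 0.5, 0.5]
theta = weighted_mle_bernoulli(samples, weights)
```

With equal weights this reduces to ordinary maximum likelihood; the transfer-learning gain comes entirely from how the source-sample weights are chosen.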
  • QIN Jie,CAO Lei,PENG Hui,LAI Jun
    Computer Engineering. 2016, 42(8): 160-165. https://doi.org/10.3969/j.issn.1000-3428.2016.08.029

    Considering the large number of named entities and the deep domain specificity of feature words in military text information, this paper proposes a vector description method for domain feature words. It compresses the vector space through optimized word segmentation and domain feature word selection, improves the extraction rules for four important types of named entity, including time, place name, troop name and weapon equipment, and extends the word segmentation dictionary. It improves the domain feature word selection algorithm by combining domain relevance and domain consistency, enlarging the difference between domain words and common words and further filtering domain feature words. Experimental results show that, after word segmentation is optimized, the named entities and some specific vocabulary in military texts can be extracted, and the number of feature words can be reduced. The accuracy and recall of the improved domain feature word selection method are increased by 20% and 16.7% respectively. The feature word vectors generated by the proposed method have strong domain characteristics.
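The combination of domain relevance and domain consistency can be sketched as a product score (the ratio forms and the toy counts are illustrative assumptions, not the paper's exact formulas):

```python
def domain_relevance(freq_domain, freq_general):
    # How much more often the word occurs in the domain corpus
    # than in a general corpus (illustrative ratio form).
    return freq_domain / (freq_domain + freq_general)


def domain_consistency(doc_fraction):
    # Fraction of domain documents that contain the word: a true
    # domain term should appear consistently across the corpus.
    return doc_fraction


def feature_score(freq_domain, freq_general, doc_fraction):
    # Combined selection score sketch: relevance x consistency,
    # enlarging the gap between domain words and common words.
    return domain_relevance(freq_domain, freq_general) * domain_consistency(doc_fraction)


# toy counts: a domain term vs a common word
domain_term = feature_score(80, 5, 0.7)
common_word = feature_score(100, 900, 0.9)
```

A common word can be frequent in both corpora, so its relevance term collapses even when its consistency is high, which is exactly the filtering behavior wanted.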

  • CAI Mei,LIU Bo
    Computer Engineering. 2016, 42(8): 166-169,177. https://doi.org/10.3969/j.issn.1000-3428.2016.08.030

    The traditional outlier detection technique based on the O-measure needs to search all paths when detecting abnormal data, and it easily makes misjudgments when the amount of data is small, so it has obvious defects in efficiency and precision. Exploiting the positive feedback feature of the ant colony algorithm, a method combining the ant colony algorithm with attribute correlation analysis is put forward for attribute outlier detection. The method chooses the converged paths of the ant colony as the exception paths, computes the O-measure value of each node on those paths, and identifies outliers based on these values. Experimental results show that this method performs better in recall, precision and efficiency than the traditional O-measure-based outlier detection technique.

  • ZHU Jinhu,XU Zhangyan,QIAO Lijuan,XIE Xiaojun,WANG Ting
    Computer Engineering. 2016, 42(8): 170-177. https://doi.org/10.3969/j.issn.1000-3428.2016.08.031

    Currently, the discernibility matrix method based on HU is not only time-consuming but also occupies a lot of storage space, so its efficiency is low. Some scholars construct the enriching discernibility matrix algorithm by pairwise comparison between elements, but its time complexity is too high for big data processing. Other methods store the discernibility elements in a compressed FP-tree, yet they cannot get rid of the unwanted elements. For this reason, this paper introduces the idea of the binary tree, gives a method that builds a binary tree from short discernibility sets and searches and compares long discernibility sets within it, and puts forward an improved enriching discernibility matrix algorithm. On this basis, it proposes the extended binary discernibility matrix and extracts rules directly from the matrix. Experimental results show that the acquisition algorithm for the enriching discernibility matrix not only reduces the time complexity but also removes the unwanted elements and reduces the storage space.

  • LI Xiaohong,XIE Meng,MA Huifang,HE Tingnian
    Computer Engineering. 2016, 42(8): 178-182. https://doi.org/10.3969/j.issn.1000-3428.2016.08.032
    Short text is sparse and high-dimensional, and existing clustering algorithms for large-scale short text suffer from low accuracy and efficiency. Aiming at this problem, a novel clustering method based on spectral clustering theory and the spectral cut criterion RMcut is proposed. Following spectral graph theory, the short text collection is modeled as a weighted undirected graph, and a document similarity matrix built from pairwise similarities provides all the information the clustering algorithm needs. A two-way partitioning method splits the graph into two parts iteratively, with RMcut as the termination condition, and the Prim algorithm adds nodes of the original graph into clusters to obtain high-quality results. Experimental results demonstrate that this algorithm has good time performance and better clustering quality than the K-means algorithm, the word co-occurrence clustering algorithm and the immunity-based clustering algorithm.
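The graph-construction step can be sketched directly: a cosine-similarity adjacency matrix of the weighted undirected document graph. The toy term-frequency vectors are invented, and the RMcut partitioning itself is not shown.

```python
import numpy as np

def similarity_matrix(docs):
    """Cosine-similarity adjacency matrix over row-normalized
    term-frequency vectors; a minimal sketch of the graph-building step."""
    X = np.asarray(docs, dtype=float)
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    Xn = X / np.where(norms == 0, 1, norms)
    W = Xn @ Xn.T
    np.fill_diagonal(W, 0)  # the graph has no self-loops
    return W

docs = [[1, 1, 0], [1, 0, 0], [0, 0, 1]]  # documents 1 and 2 share a term
W = similarity_matrix(docs)
```

Documents sharing a term get a positive edge weight, documents with disjoint vocabularies get zero, which is exactly what a graph-cut criterion then partitions.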
  • LIU Taotao,MA Fumin,ZHANG Tengfei
    Computer Engineering. 2016, 42(8): 183-187,193. https://doi.org/10.3969/j.issn.1000-3428.2016.08.033
    The reduction result should be updated continually as the data in a decision table changes dynamically. To improve the efficiency of attribute reduction while keeping the results minimal, an improved decision table reduction algorithm is introduced to acquire a simplified decision table equivalent to the original one. On this basis, combining the merits of the positive region and the discernibility matrix, this paper proposes an incremental algorithm for attribute reduction, which makes full use of the reduction results of the original decision table and stores only the discernibility elements generated by the newly inserted objects. Example calculations show that the proposed algorithm can quickly update the attribute reduction on the basis of the original reduction results.
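The positive region the algorithm combines with the discernibility matrix can be computed straight from its rough-set definition: the objects whose condition-attribute equivalence class is consistent in its decision value. The toy decision table below is invented.

```python
def positive_region(objects, cond_attrs, decision):
    """Objects belonging to condition-attribute equivalence classes
    whose members all share one decision value (the positive region)."""
    classes = {}
    for obj in objects:
        key = tuple(obj[a] for a in cond_attrs)
        classes.setdefault(key, []).append(obj)
    pos = []
    for group in classes.values():
        if len({obj[decision] for obj in group}) == 1:
            pos.extend(group)
    return pos

table = [
    {"a": 0, "b": 1, "d": "yes"},
    {"a": 0, "b": 1, "d": "no"},   # conflicts with the row above
    {"a": 1, "b": 0, "d": "yes"},
]
pos = positive_region(table, ["a", "b"], "d")
```

The two conflicting rows fall outside the positive region; only the consistent third object remains, which is the quantity an incremental update must maintain as rows are inserted.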
  • SHI Yan,LI Chaofeng
    Computer Engineering. 2016, 42(8): 188-193. https://doi.org/10.3969/j.issn.1000-3428.2016.08.034
    The historical search click data of a single user is sparse, which leads to inaccurate query recommendation and a lack of diversity in the recommended queries. Therefore, this paper treats the log of each user as a document and uses the vector space model to calculate the similarity between users' documents. The frequency with which a user clicks a link in the historical data is taken as the preference score for that link, and an improved Euclidean distance is used to find the user's nearest neighbors. The similar-user set of the current user is computed in this way, and the historical behavior data of similar users is added to the data of the single user. Based on the naive Bayes model, the data is trained and the click-through rate between queries and links is predicted. These rates are used as edge weights in the click graph and propagated to generate recommendations. Experimental results show that this method achieves higher precision and mean average precision.
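The neighbor-finding step rests on standard vector-space similarity. Below is a minimal cosine similarity over made-up click-frequency vectors; the paper additionally uses an improved Euclidean distance for nearest neighbors, which is not shown.

```python
import math

def cosine(u, v):
    """Cosine similarity between two users' click-frequency vectors,
    each user's log being treated as one document."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

user_a = [3, 0, 1]   # click counts on links 1..3 (invented)
user_b = [2, 0, 1]
user_c = [0, 5, 0]
sim_ab = cosine(user_a, user_b)
sim_ac = cosine(user_a, user_c)
```

Users with overlapping click behavior score near 1 and become neighbors whose data can densify the sparse single-user log; disjoint users score 0 and are ignored.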
  • ZHANG Huiyi,XIE Yeming,YUAN Zhixiang,SUN Guohua
    Computer Engineering. 2016, 42(8): 194-198,205. https://doi.org/10.3969/j.issn.1000-3428.2016.08.035
    The traditional CHI-square feature selection method does not take into account the number of categories a word appears in within imbalanced data sets, the frequency of words, or the intra-class and inter-class distribution of words, so it fails to choose valid feature words for different categories. To solve this problem, a CHI-square feature selection method based on probability is proposed. It measures word and document frequency by word and document probabilities, and calculates the category frequency factor, the inter-class concentration factor of words, the intra-class distribution equilibrium factor of words, and the inter-class concentration factor of documents. The initial CHI-square value is adjusted by these factors, and the difference degree factor of a word across classes helps the improved CHI-square select more effective words. Text classification experiments show that, compared with the unimproved CHI-square feature selection method, the proposed method improves macro F1 significantly and has better classification performance on imbalanced datasets.
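The baseline statistic the method adjusts is the classic CHI-square over a word/category contingency table, sketched below with invented counts; the paper's probability-based correction factors are not reproduced.

```python
def chi_square(a, b, c, d):
    """Classic CHI-square for a word/category 2x2 table:
    a = in-class docs containing the word, b = out-of-class docs containing it,
    c = in-class docs without it,          d = out-of-class docs without it."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + c) * (b + d) * (a + b) * (c + d)
    return num / den if den else 0.0

chi_informative = chi_square(a=40, b=5, c=10, d=45)     # word tied to the class
chi_uninformative = chi_square(a=25, b=25, c=25, d=25)  # word independent of it
```

A word concentrated in one class scores high while an evenly spread word scores zero; the paper's factors then rescale this raw score to cope with imbalance.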
  • CHENG Haisu,LI Qingwu,QIU Chunchun,GUO Jingjing
    Computer Engineering. 2016, 42(8): 199-205. https://doi.org/10.3969/j.issn.1000-3428.2016.08.036
    Aiming at the defects of recent trajectory-based human action recognition algorithms in trajectory refinement and feature representation, this paper proposes a human action recognition algorithm using improved dense trajectories. It detects motion saliency, extracts traditional dense trajectories in the videos, and refines them by analyzing the motion saliency of the current frame and adjacent frames. The dense trajectory features combine the sequence of displacement vectors along each trajectory with Histogram of Oriented Gradient(HOG), Histograms of Optical Flow(HOF) and Motion Boundary Histograms(MBH) descriptors computed in each spatio-temporal patch. To represent the videos better, the Bag of Words(BOW) model is optimized according to the saliency value distribution to obtain a more accurate visual vocabulary. Experimental results show that the proposed algorithm effectively improves human action recognition accuracy on the KTH and UCF Sports action datasets.
  • WANG Liutao,HUANG Miao,WANG Jianxi,MA Fei
    Computer Engineering. 2016, 42(8): 206-210,219. https://doi.org/10.3969/j.issn.1000-3428.2016.08.037
    The two-step adaptive dictionary learning Super-Resolution(SR) algorithm is sensitive to the interpolated image, which causes blurred results, so an improved algorithm named intersected-resolution adaptive dictionary learning is proposed. Exploiting the redundancy of natural images, that is, images of different resolutions still share similar patches, the low-resolution image itself is used directly as the learning object of the dictionary. To overcome the insufficiency of learning a dictionary from a single image, mirror images are used to enlarge the training set. A low-resolution input image is then reconstructed into a high-resolution image by sparse representation over the new dictionary. Peak Signal to Noise Ratio(PSNR) and Structural Similarity(SSIM) are used to evaluate the reconstruction. Experimental results show that, compared with cubic interpolation and the SUSR, MSS and HLSR algorithms, the proposed algorithm preserves image texture better, produces richer and more natural results, runs faster, and achieves the highest PSNR and SSIM values in most cases.
  • YANG Kai,CHEN Lifang,LIU Yuan
    Computer Engineering. 2016, 42(8): 211-219. https://doi.org/10.3969/j.issn.1000-3428.2016.08.038
    To solve the poor real-time performance and low efficiency of traditional image matching based on Scale Invariant Feature Transform(SIFT), an extended cascade Locality Sensitive Hashing(LSH) image feature matching algorithm is proposed. A data-space floating dichotomy hash is used to build a projection space with higher locality sensitivity than the original LSH, which partitions the high-dimensional feature data so that queries are conducted only within a high-similarity set, increasing retrieval speed. Quadratic random projection hashing is then applied within each feature set to map the features into a higher-dimensional Hamming space with better locality sensitivity. A measurement combining Hamming distance and Euclidean distance completes the fast search and precise calculation. Experimental results indicate that the extended cascade LSH matching algorithm is 2.5 to 3 times faster than the traditional Best Bin First(BBF) and LSH-based methods while achieving higher matching accuracy.
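The core LSH idea, hashing features so that similar vectors land at small Hamming distance, can be sketched with sign-of-random-projection bits. The hyperplanes here are hand-picked toy values rather than the paper's cascaded floating dichotomy scheme.

```python
import numpy as np

def projection_hash(x, planes):
    """Each hyperplane contributes one bit (the sign of the projection),
    mapping a feature vector into Hamming space."""
    return (planes @ x > 0).astype(int)

def hamming(h1, h2):
    return int(np.sum(h1 != h2))

planes = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
a = np.array([1.0, 0.2])
b = np.array([0.9, 0.3])    # close to a -> bits tend to agree
c = np.array([-1.0, -0.2])  # far from a -> bits tend to differ

ham_ab = hamming(projection_hash(a, planes), projection_hash(b, planes))
ham_ac = hamming(projection_hash(a, planes), projection_hash(c, planes))
```

Nearby features collide in Hamming space while distant ones separate, so a query only needs exact Euclidean comparison within its small high-similarity bucket.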
  • ZHANG Xi,ZHANG Jie,LAI Quan,LANG Haitao,ZHANG Linjie
    Computer Engineering. 2016, 42(8): 220-226,232. https://doi.org/10.3969/j.issn.1000-3428.2016.08.039
    Salt and Pepper(SP) noise is easily generated in the process of image acquisition and transmission and seriously degrades image quality. For the difficult problem of filtering high-density SP noise, an approach for removing high-density SP noise from images based on net function interpolation is proposed. The singularity and irrelevance characteristics of SP noise are used to detect it, and according to the noise locations, net function interpolation restores and reconstructs the image. Experimental results show that the proposed algorithm filters better than the conventional median filter and the adaptive median filter; it not only removes high-density (density>70%) SP noise but also preserves image details.
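The detect-then-reconstruct pipeline can be illustrated with a much simplified stand-in: extreme-valued pixels (0 or 255) are flagged as noise and replaced by the mean of their non-noisy neighbours. This is not the paper's net function interpolation, only the shape of the pipeline.

```python
import numpy as np

def remove_sp_noise(img):
    """Flag pixels at the gray-level extremes as SP noise and restore
    each from its clean 8-neighbours (simplified reconstruction)."""
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            if img[i, j] in (0, 255):
                neigh = [img[x, y]
                         for x in range(max(0, i - 1), min(h, i + 2))
                         for y in range(max(0, j - 1), min(w, j + 2))
                         if (x, y) != (i, j) and img[x, y] not in (0, 255)]
                if neigh:
                    out[i, j] = sum(neigh) / len(neigh)
    return out

img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255   # salt
img[0, 0] = 0     # pepper
clean = remove_sp_noise(img)
```

Only the flagged pixels change; untouched pixels keep their values, which is why detection-based methods preserve details better than blanket median filtering.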
  • TANG Chaoying
    Computer Engineering. 2016, 42(8): 227-232. https://doi.org/10.3969/j.issn.1000-3428.2016.08.040
    Melanin is one of the most important pigments in human skin. At present, the most commonly used non-intrusive estimation methods are based on spectrometers, which are very expensive, restricting their application. Therefore, a fast algorithm for estimating the distribution of melanin from visible-light images is proposed. Based on the principles of optics and skin biophysics, the process of skin color formation is analyzed, and the inverse model is approximated by an Elman neural network to obtain the corresponding melanin distribution. The algorithm is tested on skin images taken under different illumination and camera conditions. Experimental results demonstrate that the proposed algorithm estimates more accurately than similar methods and can help medical workers quickly diagnose abnormal melanin distributions.
  • XU Meimei,XIAO Qionglin,WANG Lu,XU Wenying,MA Yue,YAN Zhijun
    Computer Engineering. 2016, 42(8): 233-236,242. https://doi.org/10.3969/j.issn.1000-3428.2016.08.041
    To solve the edge breakpoint and pseudo edge problems that occur in threshold-segmentation-based detection of polycrystalline silicon crystal domains, an adaptive edge extraction optimization algorithm based on spatial gradient information is proposed. The image segmentation algorithm combining the bimodal method and Otsu's method is modified so that segmentation thresholds are selected adaptively according to image characteristics. The gradient edge is extracted using the spatial gradient information of the original image; combining the gradient edge with the initial edge connects edge breakpoints and removes pseudo edges. Experimental results show that the proposed algorithm is simple and effective, and complete, continuous edges can be extracted adaptively from polycrystalline silicon images with various characteristics.
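The Otsu component of the combined thresholding step can be sketched directly from its definition: pick the threshold that maximises between-class variance. This brute-force version is illustrative; the paper's adaptive combination with the bimodal method is not reproduced.

```python
import numpy as np

def otsu_threshold(pixels):
    """Exhaustively pick the gray level maximising between-class
    variance w_fg * w_bg * (mean_fg - mean_bg)^2."""
    pixels = np.asarray(pixels).ravel()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        fg = pixels[pixels >= t]
        bg = pixels[pixels < t]
        if fg.size == 0 or bg.size == 0:
            continue
        w_fg, w_bg = fg.size / pixels.size, bg.size / pixels.size
        var_between = w_fg * w_bg * (fg.mean() - bg.mean()) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Two clearly separated gray-level populations.
pixels = [10] * 50 + [200] * 50
t = otsu_threshold(pixels)
```

On a cleanly bimodal histogram any threshold between the two modes is optimal; the adaptive scheme in the paper matters precisely when the histogram is not this clean.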
  • FENG Mingkun,ZHAO Shengmei,SUN Lihui,SHI Xiang
    Computer Engineering. 2016, 42(8): 237-242. https://doi.org/10.3969/j.issn.1000-3428.2016.08.042
    Aiming at the local assessment fusion problem in image quality evaluation, the effect of different filters, along with their weighting factor, window size and standard deviation, on the performance of the Structural Similarity(SSIM) algorithm is studied. Two objective image quality assessment algorithms, Peak Signal to Noise Ratio(PSNR) and Mean Singular Value Decomposition(M_SVD), are improved based on optimized Gaussian filter parameters: local blocks are assessed with Gaussian weights and the local assessments are fused into an overall evaluation. Experimental results show that the Spearman Rank-order Correlation Coefficient(SROCC), Pearson Correlation Coefficient(CC) and Root Mean Square Error(RMSE) of the improved PSNR algorithm are enhanced by 3.78%, 2.40% and 2.02% respectively, and those of the improved M_SVD algorithm by 1.78%, 0.67% and 4.99% respectively. Both improved algorithms offer higher assessment stability and real-time performance.
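The baseline PSNR being improved is simple to state. The sketch below computes plain global PSNR from 10*log10(peak^2 / MSE); the paper's contribution (Gaussian-weighted local blocks fused into an overall score) is only described, not implemented here.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Global PSNR between a reference image and a distorted image."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

ref = np.zeros((8, 8), dtype=np.uint8)
noisy = ref.copy()
noisy[0, 0] = 16          # one corrupted pixel
value = psnr(ref, noisy)
```

Because the error is averaged globally, one bad pixel is diluted over the whole frame; block-wise Gaussian weighting is exactly what restores local sensitivity.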
  • GE Zhongxiao,XING Shuai,XIA Qin,WANG Dong,HOU Xiaofen,JIANG Tengda
    Computer Engineering. 2016, 42(8): 243-248,254. https://doi.org/10.3969/j.issn.1000-3428.2016.08.043
    In the Semi-global Matching(SGM) algorithm, the pixels surrounding the matching point cannot be fully used in the matching cost calculation, which often causes mismatches in areas with similar image texture. To solve this problem, an improved semi-global matching algorithm based on two kinds of tree structures is proposed. The influence of the cost aggregation direction is studied, two different tree structures are added to the SGM cost function, and the new matching cost is computed with dynamic programming and aggregated along 4 paths. Within the maximum allowable disparity range, the disparity of a matching point is the value that minimizes the aggregated matching cost. Experimental results demonstrate that the proposed algorithm overcomes this shortcoming of semi-global matching: thanks to the tree structures, the surrounding pixels are used fully, yielding higher accuracy than the original algorithm, while the number of cost aggregation paths is reduced from 16 to 4 and the execution efficiency is 2 times higher.
  • WANG Xiaoyuan,ZHANG Hongying,WU Yadong,LIU Yan
    Computer Engineering. 2016, 42(8): 249-254. https://doi.org/10.3969/j.issn.1000-3428.2016.08.044
    Aiming at the low brightness and low contrast of low-illumination color images, an adaptive brightness enhancement algorithm based on visual perception is proposed by studying how the pupil and the photoreceptor cells automatically adjust to the environment. The overall brightness of the image is enhanced by simulating the pupil's adaptation to varying environments. Dark- and light-adaptive functions are designed by simulating the adaptive adjustment of photoreceptor cells to low illumination, and a fusion function determined by the light distribution adjusts the global luminance adaptively. An exponential function applied over neighborhoods improves the local contrast of the luminance component, and the enhanced image is obtained by color restoration on the enhanced luminance. Experimental results show that the proposed algorithm effectively improves detail in dark and highlight areas, and can raise the working efficiency of image analysis and recognition, video surveillance and other computer vision systems in low-illumination environments.
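The idea of choosing an adaptive curve from the global light level can be shown with a toy power-law version: a strong brightening curve for dark images and a milder one otherwise. The curve shapes and the threshold are illustrative assumptions, not the paper's fitted dark/light adaptive functions.

```python
import numpy as np

def enhance_luminance(lum, mean_thresh=0.35):
    """Toy adaptive brightening: the exponent is selected from the
    global mean luminance (hypothetical curves and threshold)."""
    lum = np.clip(np.asarray(lum, dtype=float), 0.0, 1.0)
    gamma = 0.5 if lum.mean() < mean_thresh else 0.9
    return lum ** gamma

dark = np.full((4, 4), 0.04)      # a uniformly dark luminance channel
enhanced = enhance_luminance(dark)
```

A dark frame triggers the strong curve and is brightened substantially, while an already well-lit frame would be left nearly unchanged; the paper replaces this hard switch with a smooth fusion function.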
  • PAN Lei,ZHENG Yijun
    Computer Engineering. 2016, 42(8): 255-260. https://doi.org/10.3969/j.issn.1000-3428.2016.08.045
    To effectively improve the quality of images degraded by fog, this paper proposes a single-image dehazing algorithm based on multi-scale image fusion. After the image is converted to the gradient domain, each scale value is calculated from the transmissivity, and the gradient domain is enhanced and reconstructed, avoiding the color distortion that improperly chosen scale values cause after enhancement. The corresponding weight of each scale's enhanced image is calculated from its contrast and saturation, and the images enhanced at different scales are fused by a multi-resolution fusion algorithm to obtain the final clear image. Experimental results show that this algorithm is superior to current dehazing algorithms in subjective visual effect and objective evaluation indexes, and it is more time-efficient.
  • ZHANG Guoping,ZHOU Gaiyun,MA Li
    Computer Engineering. 2016, 42(8): 261-265. https://doi.org/10.3969/j.issn.1000-3428.2016.08.046
    Concerning occlusion, missed detections and noise in multi-target tracking under complex scenes, an improved multi-target detection and tracking method using key-point modeling and weakly supervised appearance model updating is proposed. First, a corner detector obtains the key-points and their absolute positions, and background subtraction yields a binary map. Key-points are then classified into significant and weak key-points by the mapped images, and significant key-points are used to build candidate models. Finally, the weakly supervised appearance model updates the tracking frames to realize multi-target detection. Experimental results on several video sets show that, compared with the GM-PHD tracking method and a multi-target tracking method based on continuous forward estimation, the proposed method achieves higher multi-target tracking accuracy and faster running speed.
  • ZHU Jueyu,YUAN Zihua,LI Feng,ZHOU Shuren
    Computer Engineering. 2016, 42(8): 266-270. https://doi.org/10.3969/j.issn.1000-3428.2016.08.047
    To address the extremely large pose search space caused by the high degrees of freedom of body parts during pose estimation, a pose search space reduction algorithm combining GrabCut with the Simple Linear Iterative Clustering(SLIC) superpixel approach is proposed. The SLIC algorithm segments images into superpixels, which serve as the nodes of an s-t graph; the mean color feature within a superpixel is used as the feature value of every pixel in that region. Foreground and background Gaussian Mixture Models(GMM) are built and their parameters updated iteratively, and foreground extraction is achieved by Min Cut. Pose estimation is then performed only in the extracted foreground area. Experimental results show that, compared with pose search space reduction based on GrabCut alone, GrabCut with SLIC performs much better in both running time and pose estimation accuracy.
  • WEI Wei,WU Kongping,GUO Laigong,QIN Meng
    Computer Engineering. 2016, 42(8): 271-276. https://doi.org/10.3969/j.issn.1000-3428.2016.08.048
    Aiming at super-resolution image reconstruction, a single-image super-resolution reconstruction algorithm based on joint nonnegative dictionary learning is proposed and applied to remote sensing images. High-resolution and low-resolution samples are obtained by preprocessing existing high-resolution images. A joint nonnegative dictionary training technique is proposed, and the high-resolution and low-resolution dictionaries are obtained by training on the corresponding samples. Super-resolution remote sensing images are recovered with these dictionaries, and the computational complexity is analyzed. Experimental results show that, compared with the bicubic interpolation method, the Joint Dictionary Training(JDT) algorithm and the coupled dictionary training algorithm, the proposed algorithm achieves better reconstruction at lower computational cost.
  • ZHANG Lei,YU Fengqin
    Computer Engineering. 2016, 42(8): 277-281,288. https://doi.org/10.3969/j.issn.1000-3428.2016.08.049
    Object tracking via Spatial-temporal Context(STC) suffers from drift when the object is occluded, so this paper proposes an improved STC tracking algorithm based on confidence map properties. Three combinations of sub-block features are used to represent the object. The regions corresponding to the multiple peaks in the confidence map are regarded as candidate regions; their features are extracted and matched against the object template to find the most similar one, and a sequential Monte Carlo method obtains the best object region. Finally, the STC learning rate is adjusted according to the occlusion ratio of the object sub-blocks to reduce the effect of occlusion. Simulation results show that, compared with the STC tracking algorithm and the compressive tracking algorithm, the proposed algorithm still tracks the object accurately under rapid movement or occlusion.
  • CHENG Xianbao
    Computer Engineering. 2016, 42(8): 282-288. https://doi.org/10.3969/j.issn.1000-3428.2016.08.050
    Connecting the occlusion problem in multi-target tracking with the social interaction model, an improved Simplified Swarm Optimization(SSO) algorithm with a multi-group social model and inter-group dynamic information is proposed for multi-target tracking. To preserve particle swarm diversity, a new swarm is initialized to predict the target speed. Combined with multi-swarm optimization, the SSO update equation is modified with the continuity information of the targets, that is, the target speed and position are updated. To adapt to targets entering and leaving the scene, strategies for initializing new swarms and terminating iteration are constructed. Experimental results on the CAVIAR, PETS2009 and Oxford data sets verify the effectiveness of the proposed algorithm: compared with the color-based particle filtering algorithm, the histogram-based algorithm, the local sparse algorithm and the Gaussian density function algorithm, it improves multi-object tracking accuracy by at least 10% and increases the number of mostly-tracked trajectories by about 8%, tracking multiple targets robustly.
  • LIU Yanghao,XIE Linbo
    Computer Engineering. 2016, 42(8): 289-293. https://doi.org/10.3969/j.issn.1000-3428.2016.08.051
    To accurately extract the spatial information of objects and realize 3D image reconstruction, the camera lens distortion model and lens projection model are studied. Based on the calibration method proposed by Zhang Zhengyou, an improved nonlinear camera calibration method based on coplanar points is presented. The distortion parameters are obtained from the 3D coordinates of the chessboard corners and the corresponding image coordinates, and optimized by the nonlinear least squares method to yield accurate camera calibration parameters. Experimental results show that the improved method achieves higher calibration accuracy and speed than the Zhang Zhengyou calibration method and is suitable for single-camera calibration.
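The distortion model being fitted can be written down in a few lines. The sketch below applies the standard two-coefficient radial distortion to a normalized image point; the coefficient values are made up, and the calibration itself (estimating k1, k2 by nonlinear least squares against the chessboard corners) is not shown.

```python
def apply_radial_distortion(x, y, k1, k2):
    """Forward radial lens distortion on normalized image coordinates:
    (x, y) is scaled by 1 + k1*r^2 + k2*r^4 (hypothetical k1, k2)."""
    r2 = x * x + y * y
    factor = 1 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

# A point halfway out from the optical axis under mild barrel distortion.
xd, yd = apply_radial_distortion(0.5, 0.0, k1=-0.2, k2=0.05)
```

Calibration inverts this relation: given observed distorted corners and known board geometry, the least-squares step searches for the k1, k2 (and intrinsics) that best reproduce the observations.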
  • CHEN Yanjun,WANG Zhiyong,GUO Fengyi,WANG He,ZHENG Zhiqiang,LIU Yanli
    Computer Engineering. 2016, 42(8): 294-299,304. https://doi.org/10.3969/j.issn.1000-3428.2016.08.052
    It is of great significance for the prevention of electrical fires to detect looseness faults of bolted cable joints in power supply and distribution systems. Therefore, a looseness fault detection method based on the Wigner-Hough Transform(WHT) is proposed. The picked-up current is transformed by the Wigner-Ville Distribution(WVD) to obtain its time-frequency characteristics; the feature information is extracted by the Hough transform, and both the peak value and the interval length of the peak value change rate are calculated. The WHT peak values differ significantly under different experimental conditions, so they can be used to identify the looseness fault, while the interval length of the peak value change rate reveals the looseness degree. Experimental results show that this method detects the looseness fault of bolted cable joints effectively.
  • YI Qingming,LUO Chong,SHI Min
    Computer Engineering. 2016, 42(8): 300-304. https://doi.org/10.3969/j.issn.1000-3428.2016.08.053
    In high-dynamic environments, the Global Positioning System(GPS) software receiver has low tracking accuracy and is prone to loss of lock. To solve these problems, a high-accuracy fast tracking method based on Maximum Likelihood Estimation(MLE) is proposed. The design of the high-dynamic tracking loop structure based on MLE is analyzed, and a two-step approximate maximum likelihood estimation method is proposed that quickly acquires the code phase error estimate; interpolation is then used to obtain a more accurate Doppler frequency error estimate. Theoretical analysis and simulation results show that, compared with the traditional maximum likelihood estimation method, the improved method reduces time consumption by 27.3% and has higher tracking accuracy.
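The interpolation step can be illustrated with the common three-point parabolic refinement of a discrete peak, which recovers a sub-bin estimate from the grid maximum and its two neighbours; whether the paper uses exactly this interpolant is not stated, so treat it as a generic sketch.

```python
def parabolic_peak(y_left, y_peak, y_right):
    """Sub-bin offset of the true maximum, from a parabola fitted
    through three consecutive samples of a correlation curve."""
    denom = y_left - 2 * y_peak + y_right
    return 0.0 if denom == 0 else 0.5 * (y_left - y_right) / denom

# Samples of a parabola whose true maximum lies 0.3 bins to the right
# of the sampled peak: y(x) = 1 - (x - 0.3)**2 at x = -1, 0, 1.
y = [1 - (x - 0.3) ** 2 for x in (-1, 0, 1)]
offset = parabolic_peak(*y)
```

For a truly parabolic correlation peak the recovery is exact, so the Doppler estimate is refined well below the search-grid resolution at the cost of three samples.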
  • LI Wei,WANG Li,YI Zichuan,ZHOU Guofu
    Computer Engineering. 2016, 42(8): 305-310,315. https://doi.org/10.3969/j.issn.1000-3428.2016.08.054
    To implement mobile terminal control of household sockets over a ZigBee network, this paper presents a design for a multi-functional smart socket system based on ZigBee technology. The system includes a mobile client, smart socket nodes and an intelligent controller. The mobile client is developed on the Android platform; each smart socket node consists of a CC2530 and a control circuit; and the intelligent controller is built with a DM9000, an STM32 and a CC2530. With a 5 V/2 A double USB charging jack, the smart socket system also provides power metering, timed switching, remote control, infrared remote control and voice control. Experimental results show that the whole system adopts a modular design, allows convenient function extension through the USB jack, and has the advantages of ease of use, functional diversity and strong expandability.
  • FENG Jingwei,CHE Mingming,MA Wei,DAI Wen,HAN Mei
    Computer Engineering. 2016, 42(8): 311-315. https://doi.org/10.3969/j.issn.1000-3428.2016.08.055
    Existing indoor navigation solutions usually rely on pre-installed sensor networks and are therefore unsuitable for emergency agents. A new indoor personal navigation method based on a laser range finder and a Micro-Inertial Measurement Unit(MIMU), which needs no pre-installed sensor network, is proposed. Simultaneous Localization and Mapping(SLAM) is introduced into indoor personal navigation: the method continuously builds indoor maps and uses scan matching as the navigation algorithm. Experimental results show that, compared with inertial navigation, the proposed method achieves more precise indoor navigation in different environments.
  • WEI Juan,WANG Chongke
    Computer Engineering. 2016, 42(8): 316-321. https://doi.org/10.3969/j.issn.1000-3428.2016.08.056
    Aiming at the problems of occlusion and identity switching in multi-object tracking, an extension scheme from a single-object tracker to a multi-object tracker based on color particle filtering is proposed. When trajectories come very close to each other, the proposed adaptive conflict prevention model separates them. When object occlusion occurs, the overall tracker model is divided into several parts, and the visible parts are used for tracking and occlusion reasoning; when the objects are similar in appearance, a monitoring method handles the occlusion. When an object is completely occluded, the particles around it are re-initialized to recapture the re-emerging object and realize multi-object tracking. Experimental results show that the proposed tracker has obvious advantages in False Negative Rate(FNR), False Positive Rate(FPR), Mis-matching Rate(MMR) and Multi-object Tracking Accuracy(MOTA) compared with the MIT and LSAM trackers.