
15 February 2015, Volume 41 Issue 2
    

  • ZHAO Yuyan,CHEN Haibao,ZHAO Shenghui
    Computer Engineering. 2015, 41(2): 1-6. https://doi.org/10.3969/j.issn.1000-3428.2015.02.001
    Considering cost and flexibility, building a temporary private cloud in a public cloud to replace a local physical cluster is an effective way for users to run a large number of parallel jobs during a particular period. However, existing scheduling algorithms are not suitable for temporary private clouds hosting tightly coupled parallel applications, because they cannot use the logical topology information of the temporary private cloud, which can degrade the performance of such applications. To mitigate this impact, a logic-topology-aware scheduling algorithm is presented. It builds the logical topology of the virtual machines according to the bandwidth and latency among them, and makes scheduling decisions based on this topology and information about the applications. Experimental results show that, compared with the Round Robin and Random algorithms of open-source cloud software, the logic-topology-aware scheduling algorithm improves the performance of tightly coupled parallel applications.
  • CUI Jingsong,LU Haoyu,GUO Chi,HE Song
    Computer Engineering. 2015, 41(2): 7-11,16. https://doi.org/10.3969/j.issn.1000-3428.2015.02.002
    To solve the problem that fault troubleshooting on cloud platforms is not timely, and to guarantee the continuity of cloud services, this paper designs and implements a virtualization fault detection and recovery system based on an event-driven mechanism for the open-source cloud platform OpenStack. The system is composed of a GUI layer, a scheduling layer, a logic layer and a functional layer, and processes the information transmitted in the system as timed events on the basis of the event-driven mechanism. It mainly uses a perception module, a policy module and an execution module, which call the OpenStack API and the Libvirt API to interact with virtual machine management. The fault detection and recovery workflow includes information acquisition, analysis and processing, and fault recovery: by monitoring the cloud platform's runtime environment in real time, the system obtains state parameters, analyzes them, develops countermeasures according to established policy, and achieves automatic fault recovery. Experimental results show that the system can detect and recover cloud platform faults with an agentless method, enhance the security of the cloud environment, and improve the availability of the cloud platform.
  • WEI Yun,CHEN Yuanyuan
    Computer Engineering. 2015, 41(2): 12-16. https://doi.org/10.3969/j.issn.1000-3428.2015.02.003
    To solve the resource scheduling problem in cloud computing, a parallel scheduling model is proposed, which improves task parallelism while maintaining the serial relationships between tasks. Dynamic tasks submitted by users are divided into sub-tasks in serial sequences, which are put into scheduling queues with different priorities according to their running order. Tasks in the same priority scheduling queue are scheduled with an improved Delay Time Shortest and Fairness Ant Colony Optimization (DSFACO) algorithm. Considering both fairness and efficiency, the DSFACO algorithm applies to the subtask scheduling problem to achieve the shortest delay time, thus improving user satisfaction. Experimental results show that the DSFACO algorithm is better than the TS-EACO algorithm in fairness, efficiency and task delay time, and can realize optimal scheduling in cloud computing.
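The serial decomposition and priority queueing described in this abstract (the step before the ant colony optimization, which is not shown here) can be sketched roughly as follows; the task names are illustrative, not from the paper:

```python
from collections import defaultdict

def build_priority_queues(tasks):
    """Place each subtask in a queue according to its position in its
    parent task's serial chain: subtasks that must run earlier land in a
    higher-priority (lower-numbered) queue, preserving serial order while
    letting subtasks in the same queue run in parallel."""
    queues = defaultdict(list)
    for task in tasks:
        for order, subtask in enumerate(task):
            queues[order].append(subtask)
    return [queues[k] for k in sorted(queues)]

# Two user tasks, each decomposed into serially ordered subtasks.
tasks = [["a1", "a2", "a3"], ["b1", "b2"]]
print(build_priority_queues(tasks))  # [['a1', 'b1'], ['a2', 'b2'], ['a3']]
```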
  • YANG Dan,LI Chaofeng,YANG Jian
    Computer Engineering. 2015, 41(2): 17-20,25. https://doi.org/10.3969/j.issn.1000-3428.2015.02.004
    To improve the utilization rate of cloud resource scheduling and keep load balance, a chaos firefly algorithm is proposed for resource scheduling in cloud computing. Taking into account task completion time, task completion efficiency and task completion safety, a cloud resource allocation model is established. By introducing a chaos algorithm into the firefly algorithm, perturbing individuals and strengthening the rate of convergence, it lowers the probability of falling into local optima. A Lagrange relaxation function is introduced to handle resource shortages in cloud scheduling. Simulation results show that the improved algorithm can effectively avoid imbalance in resource allocation, shorten task completion time and enhance the integrated processing capacity of the system.
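As an illustration of the chaos step only (a sketch under common assumptions, not the paper's exact update rule), a logistic map can supply the chaotic sequence that perturbs firefly positions to escape local optima:

```python
def logistic_map(x, mu=4.0):
    """Chaotic logistic map; with mu = 4 the orbit on (0, 1) is chaotic."""
    return mu * x * (1.0 - x)

def chaotic_perturb(position, chaos_state, scale=0.1):
    """Disturb a position vector (components in [0, 1]) with a chaotic
    sequence, then clamp back into the search box; returning the updated
    chaos state lets successive calls continue the same orbit."""
    perturbed = []
    for p in position:
        chaos_state = logistic_map(chaos_state)
        perturbed.append(min(1.0, max(0.0, p + scale * (chaos_state - 0.5))))
    return perturbed, chaos_state
```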
  • CHEN Jie,YU Yonggang,LIU Mingheng,PAN Shenghe,XU Kefu
    Computer Engineering. 2015, 41(2): 21-25. https://doi.org/10.3969/j.issn.1000-3428.2015.02.005
    With the development of cloud computing, massive data can be stored and managed easily. The Security Management Platform (SMP) is a support platform that realizes normalized security management operation. In real applications, this platform needs to process, in real time, the massive information generated by security devices. Considering the problem of low query efficiency, an efficient log analysis system based on cloud computing is presented for the SMP. It introduces the Hadoop distributed infrastructure and, based on a study of the query translation mechanism of Hive, applies Hadoop Distributed File System (HDFS) and MapReduce to the effective storage and querying of massive logs. Experimental results show that the proposed system improves query performance by about 90% compared with the existing Oracle storage method, and can further improve the response speed of the SMP.
  • LIU Qi,XIAO Yanghua,WANG Wei
    Computer Engineering. 2015, 41(2): 26-30. https://doi.org/10.3969/j.issn.1000-3428.2015.02.006
    Usual approaches to automatic generic relation extraction from texts record only simple information such as positions and frequency, and ignore the enormous amount of context information that is very helpful for recognizing typical relationships. A new approach is proposed to recognize typical generic relationships from candidates extracted from Internet texts. Abundant semantic information is kept while relations are captured. It integrates natural language features of both entities and concepts to constitute an entity feature set, calculates the probability that an entity belongs to different concepts based on naive Bayes, and recognizes typical generic relationships. Experimental results show that, for judging whether a generic relation is typical, the method improves recognition accuracy by more than 5% compared with the frequency-based recognition method.
  • HE Yihui,PAN Mingcong,XU Wei,PENG Hui
    Computer Engineering. 2015, 41(2): 31-35. https://doi.org/10.3969/j.issn.1000-3428.2015.02.007
    Decomposition and allocation of combat tasks play important roles in the process of command and decision. With the increasing complexity of the battlefield environment, methods of task allocation designed for certain environments are no longer able to meet the needs of the commander. To solve this problem, this paper analyzes the uncertainty existing in the process of decomposition, allocation and execution of tasks, and gives a formal description of each uncertainty with mathematical methods. On this basis, it proposes a method of calculating fitness based on probabilistic reasoning. This method can be used in intelligent algorithms to quantitatively assess task allocation schemes. Finally, an example verifies the reasonability and effectiveness of the proposed method.
  • CAO Yue,GU Naijie,REN Kaixin,ZHANG Xu,WU Zhiqiang
    Computer Engineering. 2015, 41(2): 36-40,46. https://doi.org/10.3969/j.issn.1000-3428.2015.02.008
    To improve the interactive performance of Linux on multi-core systems, this paper designs and implements an improved task scheduling algorithm named Group Fair Schedule (GFS). According to the hardware characteristics of a multi-core system, GFS automatically configures a group of CPUs with close affinity to share one task run queue, so that the costs of competitive access, task migration inside a group, and run queue load balancing between groups can be weighed, and scheduling delay is reduced. GFS gives priority to awakened tasks so that the interactive performance of multi-core systems is improved. Test results show that GFS decreases the average response time of interactive tasks under different background loads, and improves the interactive performance of multi-core systems effectively.
  • ZHANG Binlian,XU Hongzhi
    Computer Engineering. 2015, 41(2): 41-46. https://doi.org/10.3969/j.issn.1000-3428.2015.02.009
    With the continuous expansion of the scale of multiprocessor systems, the issue of energy consumption becomes more and more important, and how to save energy becomes an important problem to be solved. For multiprocessor systems, an On-line Energy-efficient Real-time Scheduling Algorithm (OERSA) aimed at random tasks is proposed. According to the arrival time and computation amount of existing tasks, the algorithm uses statistical methods to estimate the execution voltage/frequency of a new task on an idle processor, which meets deadlines and effectively saves energy for tasks that have not yet arrived. At the same time, for tasks executed on a single processor, the algorithm first calculates the average voltage/frequency required to perform these tasks, thus making all task execution speeds as equal as possible. When some tasks cannot meet their deadline requirements, the voltage/frequency for not-yet-executed tasks is raised. Experimental results show that OERSA has obvious advantages in meeting deadlines and saving energy compared with the EDF, HVEA, MEG and ME-MC algorithms.
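The speed-equalizing idea (run all queued work at the lowest common voltage/frequency that still meets the deadline, raising it only when the deadline is tight) can be sketched as follows. This is a simplification that ignores discrete DVFS levels and the statistical estimation step; the names are illustrative:

```python
def scaled_frequency(work_units, deadline, f_min=0.2):
    """Choose one frequency (as a fraction of f_max) so the total work
    just finishes at the deadline; running every task at this equalized
    speed saves energy versus running at full speed and idling.
    The result is capped at 1.0 (f_max) when the deadline is tight."""
    needed = sum(work_units) / deadline
    return min(1.0, max(f_min, needed))

print(scaled_frequency([2.0, 3.0], 10.0))  # 0.5: half speed still meets the deadline
```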
  • MA Yanfang
    Computer Engineering. 2015, 41(2): 47-51,56. https://doi.org/10.3969/j.issn.1000-3428.2015.02.010
    With the development of the Internet, the environment of software gradually changes into an open and dynamic one, so the results of software executing in different environments affect the trustworthiness of the software. In some special fields, it is necessary to establish an experimental environment in order to test the properties of software, so it is important to evaluate the degree to which the experimental environment approximates the real environment. Using measure theory from topology and partial orders from domain theory, a quantitative model describing the approximation degree between environments is proposed based on process algebra. The model computes the approximation degree between environments based on the partial iteration between software and its environment. Furthermore, some examples are shown to verify the model and some algebraic properties are proved.
  • ZHANG Yifan,YIN Shuxiang
    Computer Engineering. 2015, 41(2): 52-56. https://doi.org/10.3969/j.issn.1000-3428.2015.02.011
    Existing privacy-preserving proximity testing algorithms usually quantize the user's location according to grid size, which gives these algorithms low accuracy. Aiming at this problem, this paper proposes two accurate and secure proximity testing algorithms. A user transforms a location into two parts, the coordinates inside the grid and the grid index, and sends both after encryption. The server then computes all possible grids which satisfy the query according to the encrypted coordinates, and the user judges whether the two users are in proximity according to the server's response. Experimental results show that algorithm 1 runs fast and sends fewer messages, while algorithm 2 is more secure but needs more computation and communication cost and requires both users to be online. Users can choose the more appropriate algorithm based on their trust in the server, the query performance, and the scenario.
  • HU Qian,WANG Chao,WANG Haixia,WANG Dongsheng
    Computer Engineering. 2015, 41(2): 57-62,75. https://doi.org/10.3969/j.issn.1000-3428.2015.02.012
    Fault injection provides an effective method to evaluate the reliability of a system, which is a complex topic in multi-core situations. There are many simulation-based fault injection tools now, most of which are implemented with Field Programmable Gate Array (FPGA) and Very High Speed Integrated Circuit Hardware Description Language (VHDL), with limited fault models. Based on Simics, a system simulator widely used in computer architecture, this paper designs and implements a system-level fault injection platform supporting different firmware, operating systems and applications. It can inject faults into several components, with different fault models (including transient, permanent and intermittent faults) and most fault types. Furthermore, it integrates a fault detection module into the system. After observing the propagation of hardware faults in the system, it analyzes the effect of different components and fault models at the system level, informing fault detection, and finds that transient faults have little impact on the system, while permanent faults seriously interrupt execution and intermittent faults behave differently on different components.
  • LIU Chunhong,XU Lihua,YAN Ting,YANG Zongyuan
    Computer Engineering. 2015, 41(2): 63-69,80. https://doi.org/10.3969/j.issn.1000-3428.2015.02.013
    Function calls whose source code is unavailable cannot be appropriately handled by symbolic execution in traditional concolic testing. To solve this problem, this paper proposes a segmented concolic testing method, which weaves, on demand, symbolic execution, segmented symbolic execution and concrete execution throughout the testing process. Such function calls are treated as separate code segments, dynamically executed and analyzed to derive their corresponding program semantics. To demonstrate the effectiveness of the proposed method, this paper implements sCREST, a segmented concolic testing system based on CREST, and experiments with five open source systems. Experimental results show that segmented concolic testing is able to generate test data that covers more branches than traditional approaches.
  • XIONG Zhigang,LI Jing,SU Zhenyang,PENG Weiping
    Computer Engineering. 2015, 41(2): 70-75. https://doi.org/10.3969/j.issn.1000-3428.2015.02.014
    With Service-Oriented Architecture (SOA) being widely used, a large number of legacy systems with different communication technologies access the Enterprise Service Bus (ESB) as services. In fields with high real-time requirements, these information systems generally adopt the Data Distribution Service (DDS) communication technology, and when such systems access the ESB, a communication conversion problem between ESB and DDS arises. To address this, this paper designs a communication adapter model with a three-tier architecture, comprising an ESB messaging layer, a message and packet conversion layer, and a DDS packet publish-subscribe layer. According to the message and packet name, the adapter traverses the mapping file and the information model definition file to convert between packets and messages, and the conversion results are transformed into a standard format used for communication. This paper builds a hybrid communication system between ESB and DDS to test the performance of the adapter model. Experimental results show that the communication conversion takes less than 100 ms and meets the real-time requirements.
  • DU Xiaoyu,LI Hui,ZHOU Lin
    Computer Engineering. 2015, 41(2): 76-80. https://doi.org/10.3969/j.issn.1000-3428.2015.02.015
    Coverage is a fundamental issue and an important indicator of service quality in Wireless Sensor Networks (WSN). For a three-dimensional underwater sensor network model, a virtual force algorithm based on directional movement is proposed that simplifies the virtual force to only the repulsion between neighboring nodes. This paper defines the ideal relative position of two neighboring nodes as the position at which their sensing spheres are tangent. The virtual force is proportional to the distance from a node's original position to this ideal position, and the movement distance is determined by the resultant of the virtual forces acting on the node. Experimental results show that the algorithm can effectively optimize the layout of underwater sensor networks and improve the network's coverage rate.
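The single-neighbor repulsion step can be sketched as below. This is a simplified illustration: it moves the node all the way to the tangent position for one neighbor, whereas the paper scales the move by the resultant force over all neighbors:

```python
import math

def repulsion_move(node, neighbor, sensing_radius):
    """Push `node` directly away from `neighbor` to the ideal position
    where the two sensing spheres are tangent (center distance equal to
    2 * sensing_radius); no move when the spheres do not overlap."""
    dx = node[0] - neighbor[0]
    dy = node[1] - neighbor[1]
    dz = node[2] - neighbor[2]
    d = math.sqrt(dx * dx + dy * dy + dz * dz)
    ideal = 2.0 * sensing_radius
    if d == 0.0 or d >= ideal:
        return node  # coincident (direction undefined) or no overlap
    s = ideal / d
    return (neighbor[0] + dx * s, neighbor[1] + dy * s, neighbor[2] + dz * s)

print(repulsion_move((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), 1.0))  # (2.0, 0.0, 0.0)
```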
  • DENG Sui,DENG Hanlin,PAN Qiang,LIU Haitao
    Computer Engineering. 2015, 41(2): 81-84. https://doi.org/10.3969/j.issn.1000-3428.2015.02.016
    In order to improve the accuracy of indoor localization using Received Signal Strength Indicator (RSSI) fingerprints, a new method introducing the jittering of RSSI fingerprints is proposed. It defines a calculation method for RSSI fingerprint jittering, applies it to measure the distance between the reference tags and the tags being tracked, and computes the RSSI fingerprints of virtual tags. Test results show that the average localization accuracy of the improved algorithm is about 0.35 m~0.88 m higher than RFFP and 0.38 m~0.94 m higher than LANDMARC, while the time consumed increases by only 1% and 12% respectively.
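The fingerprint-matching backbone that LANDMARC-style methods share can be sketched as follows; the jittering term introduced by this paper is not modeled, and the reader names and RSSI values are illustrative:

```python
import math

def fingerprint_distance(fp_a, fp_b):
    """Euclidean distance between two RSSI fingerprints (one reading per
    reader, in dBm); a smaller distance suggests the tags are closer."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(fp_a, fp_b)))

def nearest_reference(tag_fp, references):
    """Pick the reference tag whose fingerprint best matches the tracked
    tag; `references` is a list of (name, fingerprint) pairs."""
    return min(references, key=lambda ref: fingerprint_distance(tag_fp, ref[1]))[0]

refs = [("r1", [-40.0, -60.0]), ("r2", [-70.0, -45.0])]
print(nearest_reference([-42.0, -58.0], refs))  # r1
```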
  • HUANG Yuan
    Computer Engineering. 2015, 41(2): 85-90,95. https://doi.org/10.3969/j.issn.1000-3428.2015.02.017
    To achieve low-latency, high-reliability data gathering in Wireless Sensor Networks (WSN), this paper formulates the joint problem of tree construction, link scheduling and power assignment for data gathering as an optimization problem with the objective of minimizing data gathering latency. It divides the problem into two sub-problems: construction of a low-latency data gathering tree, and joint link scheduling and power assignment for that tree. This paper proposes a polynomial heuristic algorithm for each sub-problem and conducts extensive simulations. Simulation results show that the proposed algorithm achieves much lower data gathering latency than existing data gathering strategies while guaranteeing high reliability.
  • HONG Xuehua,MA Yongtao,LIU Kaihua,LIU Chao,HUANG Jianyao
    Computer Engineering. 2015, 41(2): 91-95. https://doi.org/10.3969/j.issn.1000-3428.2015.02.018
    To improve the performance of the energy detection algorithm, a new spectrum sensing algorithm based on all-phase Fast Fourier Transform (FFT) is proposed. By considering all possible combinations of the data segment's central sample points, the data preprocessing of the all-phase FFT can reduce the spectral leakage caused by data truncation and improve the accuracy of spectral analysis. Taking energy detection as an example, spectrum sensing based on the conventional FFT and the all-phase FFT is analyzed theoretically and simulated in Matlab. The simulation results show that under the same SNR, the spectral interference of the latter is smaller, which reduces the probability of false detection; at the same false alarm rate, the latter's probability of detection is higher. Because spectral leakage is effectively suppressed, the spectrum information obtained is closer to the real one. Therefore, the all-phase FFT energy detection algorithm performs better than the conventional energy detection algorithm.
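The baseline energy detector that both variants build on can be sketched as below; the all-phase FFT preprocessing itself is not shown, and the threshold value is illustrative:

```python
def energy_detect(samples, threshold):
    """Classic energy detection: declare the band occupied when the
    average sample energy exceeds the threshold. Lowering the threshold
    raises the detection probability but also the false alarm rate."""
    energy = sum(s * s for s in samples) / len(samples)
    return energy > threshold

print(energy_detect([1.0, -1.0, 1.0, -1.0], 0.5))   # True: strong signal
print(energy_detect([0.1, -0.1, 0.1, -0.1], 0.5))   # False: noise only
```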
  • LIU Xiaoqian,ZHAO Yiming
    Computer Engineering. 2015, 41(2): 96-99. https://doi.org/10.3969/j.issn.1000-3428.2015.02.019
    Based on the Multivariate Quadratic (MQ) problem, Multivariate Public Key Cryptosystems (MPKC) are regarded as systems that resist quantum attacks. This paper analyzes a ring signature scheme based on MQ and points out that it has issues such as secret key leakage and an incorrect security proof. To solve these problems, this paper proposes a variant ring signature scheme with provable security, applying different key generation methods to the ring signer and the remaining ring members. The scheme removes the dependence on the IP problem as much as possible, gaining higher security by direct reduction to the MQ problem. This paper gives a detailed analysis and security proof of the new scheme in terms of correctness, anonymity and unforgeability in the standard security model of ring signatures. Compared with the original scheme, the new scheme is more complete in both analysis and security proof.
  • LU Xiaoling,ZHONG Hong,LIN Qunfeng,SHI Runhua
    Computer Engineering. 2015, 41(2): 100-106. https://doi.org/10.3969/j.issn.1000-3428.2015.02.020
    With the development of Mobile Ad hoc Networks (MANET), the study of trust evaluation in clustering structures is becoming more popular. Most existing models use a single criterion for head node selection, and have difficulties with assigning and updating the trust values of different roles. This paper proposes a new multi-role trust evaluation model in which a node plays one of four roles: cluster head, cluster member, gateway or agent. Cluster heads, gateways and agents are selected using trust value, mobility and the correlation degree of nodes as criteria. Inside each cluster, the trust of the cluster head is computed by the agent node. For trust between clusters, a simplified network consisting of cluster heads, gateways and agents assigns credit to different nodes according to the characteristics of social network relationships, and the agent's recommendations are taken into account when calculating trust. NS2 simulation results show that the model can effectively detect and isolate malicious nodes, thus ensuring the security of the network.
  • YANG Jingjing,LU Jianzhu,JIANG Junhui
    Computer Engineering. 2015, 41(2): 107-112. https://doi.org/10.3969/j.issn.1000-3428.2015.02.021
    How to improve computational efficiency and reduce communication cost while achieving reliability, anonymity and auditability is becoming an important research issue in Vehicular Ad Hoc Networks (VANETs). Given the computational efficiency and communication cost of existing anonymous authentication schemes, a communication-efficient threshold anonymous authentication scheme is proposed in this paper. It signs messages with an identity-based signature algorithm from which the original message can be recovered, which improves computational efficiency and reduces communication cost. A threshold-based forwarding mechanism ensures the reliability of information, and the anonymity of the message sender is achieved with pseudonyms. To achieve auditability, the system is able to track and revoke malicious nodes. Analysis results indicate that the proposed scheme not only satisfies reliability, anonymity and auditability, but also has lower communication cost and better computational efficiency than the existing threshold anonymous announcement scheme in VANETs.
  • DANG Jiali,YU Huifang
    Computer Engineering. 2015, 41(2): 113-116. https://doi.org/10.3969/j.issn.1000-3428.2015.02.022
    Some existing group signature schemes have defects in exculpability and unforgeability. To solve this problem, this paper applies the Chinese remainder theorem to group signatures and proposes a secure group signature scheme based on it. Using the mathematical properties of the Chinese remainder theorem, important secret information can be integrated with only simple calculations, which better ensures the privacy of the private key and identity. At the same time, the scheme can effectively control the length of the data in the calculation, thus simplifying the computation process. Group members can be added or revoked efficiently without changing the secret keys of other group members. Analysis results show that this scheme has anonymity, unforgeability, traceability, and resistance to coalition and replay attacks.
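The core Chinese-remainder step, recombining residues into the unique value modulo the product of the moduli, looks like this. This is a textbook sketch, not the scheme's actual key material, and it assumes Python 3.8+ for the three-argument `pow` modular inverse:

```python
from functools import reduce

def crt_combine(residues, moduli):
    """Recover the unique x (mod m1*m2*...*mk) from the residues x mod mi
    for pairwise coprime moduli: the recombination step a CRT-based
    scheme relies on to fold several shares into one secret value."""
    M = reduce(lambda a, b: a * b, moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(Mi, -1, m) is the modular inverse
    return x % M

secret = 23  # split over the pairwise coprime moduli 5, 7, 9
print(crt_combine([secret % 5, secret % 7, secret % 9], [5, 7, 9]))  # 23
```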
  • LI Chunmei,YANG Xiaodong,ZHOU Sian,LI Yan,WANG Caifen
    Computer Engineering. 2015, 41(2): 117-121,128. https://doi.org/10.3969/j.issn.1000-3428.2015.02.023
    A secure, efficient and fine-grained access control scheme using an attribute-based encryption algorithm is proposed to solve the problem of privacy protection in social networks, and a corresponding social network architecture is designed. The proposed scheme utilizes a Linear Secret Sharing Scheme (LSSS) to construct the access policies in order to achieve a flexible access structure. The technique transfers most of the computation involved in re-encryption to the social network platform, which greatly reduces the computational cost of users while keeping the data secure. The social network platform analyzes the relationship between unauthorized users and authorized users to determine the access rights of unauthorized users, so the proposed scheme achieves transitivity of access rights. Finally, the performance and security of the proposed scheme are analyzed. Analysis results show that, compared with existing privacy protection schemes based on encryption, this scheme improves expressiveness and decryption efficiency.
  • YANG Yang,ZHU Xiaoling,DING Liang
    Computer Engineering. 2015, 41(2): 122-128. https://doi.org/10.3969/j.issn.1000-3428.2015.02.024
    A new verifiable threshold signature scheme without a trusted center is proposed based on the Chinese Remainder Theorem (CRT). The scheme does not need a trusted center: each participant acts as a distributor and generates his own secret share by exchanging secret share shadows with the others, which avoids authority deception by a trusted center. Participants use their own secret shares to generate partial signatures, and the group signature is composed of the partial signatures, which means the group private key is never used or exposed directly, ensuring its reusability. Based on the discrete logarithm problem, the scheme constructs a secret shadow verification formula, so it can identify mutual cheating among participants and effectively prevent malicious fraud. Experimental results show that, compared with secret sharing schemes based on Lagrange interpolation, this scheme is more efficient.
  • LIU Zhen,YANG Qiliang,YANG Bo
    Computer Engineering. 2015, 41(2): 129-134,140. https://doi.org/10.3969/j.issn.1000-3428.2015.02.025
    Most existing signcryption schemes are based on bilinear pairing, but signcryption without bilinear pairing is a research focus in cryptography, because the pairing operation requires a large amount of computation and such schemes cannot keep the proxy signcrypter anonymous. Signature schemes based on quadratic residues are widely used for their advantages, such as simple description, resistance to chosen ciphertext attack and high efficiency, and they are more efficient than signcryption schemes based on bilinear pairing. This paper adds anonymity to a quadratic residue-based scheme to realize anonymous proxy signcryption. Analysis results show that the scheme provides not only anonymity but also public verifiability.
  • ZHENG Xuedong
    Computer Engineering. 2015, 41(2): 135-140. https://doi.org/10.3969/j.issn.1000-3428.2015.02.026
    The optimization of DNA coding plays an important role in DNA computing. In this paper, the constraints of DNA coding optimization are described, an h distance is introduced over the set of single DNA strands, and a clustering-based niche technique is applied to construct a micro-genetic algorithm for solving the DNA coding optimization problem. In the algorithm, a similarity function between different DNA sequences is defined based on the h distance. The base letters are encoded as quaternary integers, each DNA coding sequence is encoded as a vector of quaternary integers representing an individual, and the population is encoded as a matrix of quaternary integers. Several genetic operators are constructed based on modulo-4 arithmetic, and concrete computing results are presented. Experimental results show that, compared with the latest results, the algorithm can obtain better DNA coding sequences and improve computational efficiency.
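The quaternary encoding and a modulo-4 genetic operator of the kind described above can be sketched as follows; the h distance and the niche clustering are not shown, and the operator is an illustrative example rather than one of the paper's:

```python
BASES = "ACGT"  # quaternary code: A=0, C=1, G=2, T=3

def encode(seq):
    """Encode a DNA strand as a vector of quaternary integers."""
    return [BASES.index(b) for b in seq]

def decode(vec):
    """Map a quaternary vector back to a DNA strand."""
    return "".join(BASES[v] for v in vec)

def shift_base(vec, pos, shift):
    """A genetic operator built on modulo-4 arithmetic: rotate one base
    through the A -> C -> G -> T -> A cycle by `shift` steps."""
    out = list(vec)
    out[pos] = (out[pos] + shift) % 4
    return out

print(decode(shift_base(encode("ACGT"), 1, 3)))  # AAGT
```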
  • ZHOU Bin,HUANG Yuanliang,HUANG Wei
    Computer Engineering. 2015, 41(2): 141-144. https://doi.org/10.3969/j.issn.1000-3428.2015.02.027
    To reduce the diagnosis cost and time of traditional fault tree analysis, based on a study of the structural characteristics of fault trees, this paper adopts a depth-first, left-most search algorithm to decompose the fault tree into modules, decreasing the scale of the analysis. Combined with the if-then-else operator, it converts the left-most, bottom-most module into a Binary Decision Diagram (BDD). It applies the depth-first, left-most search algorithm to acquire the cut sets and minimum cut sets of the BDD, and then replaces the module with a new bottom event of the same failure probability to generate a new fault tree. The fault probabilities of system elements are obtained by comprehensive analysis from bottom to top and from left to right, completing the fault tree analysis. A fault diagnosis case verifies that the method improves diagnosis speed and decreases diagnosis cost.
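Once the minimal cut sets have been read off the BDD, the top-event probability can be bounded with the usual rare-event approximation; independent basic events are assumed, and the event names and probabilities below are illustrative:

```python
def cut_set_probability(cut_set, p):
    """Probability that every basic event in one minimal cut set occurs
    (independent events assumed)."""
    prob = 1.0
    for event in cut_set:
        prob *= p[event]
    return prob

def top_event_probability(min_cut_sets, p):
    """Rare-event upper bound: sum of the minimal cut set probabilities."""
    return sum(cut_set_probability(cs, p) for cs in min_cut_sets)

p = {"a": 0.01, "b": 0.02, "c": 0.05}
# top = (a AND b) OR c, so the minimal cut sets are {a, b} and {c}
print(top_event_probability([{"a", "b"}, {"c"}], p))  # about 0.0502
```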
  • ZHANG Zhichang,CHEN Songyi,LIU Xin,MA Huifang
    Computer Engineering. 2015, 41(2): 145-150. https://doi.org/10.3969/j.issn.1000-3428.2015.02.028
    Hyponymy has many important applications in the field of Natural Language Processing (NLP), and the automatic extraction of hyponym relations from massive text datasets is naturally one of the important NLP research tasks. The emphasis and difficulty of this research is validating whether a hyponym extracted with a simple pattern matching method is really correct. By calculating context feature similarity (SimCF) and Brown clustering similarity (SimBrown), this paper proposes a novel approach to hyponymy validation. It applies clustering to hyponym candidates, and the clustering similarity feature is obtained by combining SimCF and SimBrown. The combination coefficient of the two kinds of similarity is derived from the SimCF and SimBrown values between all labeled training words and their hyponyms. The model can filter rough extraction results without any existing lexical relation dictionary or knowledge base. Evaluation on the CCF NLP&CC2012 word semantic relation corpus shows that the proposed approach significantly improves the F-measure compared with other approaches, including pattern matching and simple context comparison.
  • ZHANG Huyin,LIU Daobo,WEN Chunyan
    Computer Engineering. 2015, 41(2): 151-156. https://doi.org/10.3969/j.issn.1000-3428.2015.02.029
    Current word similarity calculations do not adequately consider the primary and secondary relationship between the distance and the depth of sememes. In addition,concept similarity is specified directly when the conceptual description expression contains specific words,which leads to unreasonable results. In the proposed method,the influence of sememe depth on word similarity is bounded by sememe distance. The statistical meanings of concept expressions in HowNet are analyzed,and the word similarity calculation replaces the specific words appearing in the conceptual description expression with the first basic sememe that reflects the essence of the word. On this basis,an improved word semantic similarity algorithm is proposed. Experimental results show that the improved algorithm effectively improves the precision of word similarity calculation.
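The distance-dominant idea above can be sketched as a similarity term where depth only modulates the distance score. The formula and constants (`alpha`, `beta`) below are illustrative assumptions, not the paper's equations:

```python
# Hedged sketch of a word-similarity term in which sememe distance is the
# primary factor and sememe depth a bounded secondary factor.
def sememe_sim(dist, depth, alpha=1.6, beta=2.0):
    # Distance term dominates; the depth term only scales it, so a large
    # distance cannot be compensated by depth alone.
    return (alpha / (dist + alpha)) * (depth / (depth + beta))

# Closer sememes score higher; at equal distance, deeper (more specific)
# common sememes score higher.
print(sememe_sim(0, 5), sememe_sim(2, 5), sememe_sim(1, 8))
```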
  • LIU Tao,ZHAO Peng,LIU Huiting1,2,JI Xia
    Computer Engineering. 2015, 41(2): 157-160. https://doi.org/10.3969/j.issn.1000-3428.2015.02.030
    Mainstream evaluation collocation extraction methods are based on syntactic dependency parsing. Because the grammar of most Chinese evaluation texts is not standard,the syntactic analysis results are unstable,which affects the extraction of evaluation collocations. To solve this problem,this paper presents an improved evaluation collocation extraction method based on kernel sentences,which extracts collocations by combining kernel sentences with syntactic dependency. The method significantly improves the stability of syntactic analysis results,and when dealing with complex sentences it also analyzes the coordinative relationships among emotional words and among opinion targets. Experimental results show that the method improves both recall and accuracy.
  • LIN Xianghong,ZHANG Yuping,LI Zhiqiang,WANG Peiqing
    Computer Engineering. 2015, 41(2): 161-166. https://doi.org/10.3969/j.issn.1000-3428.2015.02.031
    Neurons are the basic building blocks of nervous systems and thus constitute the computational units of the brain. Computational modeling of neuronal morphology is significant for understanding structure-function relationships and brain information processing. This paper introduces the general computational framework of generation algorithms for three-dimensional neuronal morphology and surveys research advances on such algorithms,which can be divided into three categories according to their generation mechanisms:reconstruction algorithms based on statistical analysis,generation algorithms based on grammar rules,and growth algorithms based on biological development. Through a detailed comparison,the advantages and disadvantages of these algorithms are discussed.
  • YIN Shuxiang,JIN Ting
    Computer Engineering. 2015, 41(2): 167-172. https://doi.org/10.3969/j.issn.1000-3428.2015.02.032
    Due to the massive volume of graph data from a wide range of recent applications and unprecedented graph data growth,it is becoming economically appealing for data owners to outsource their data to a powerful Service Provider(SP),such as a cloud computing platform,which provides high-performance query services. This paper studies a novel privacy-preserving 2-hop index and query algorithm for a fundamental graph query,the reachability query. It optimizes the existing method for building a 2-hop index,proposes an optimization method(maxISCover) and two algorithms(unifyIS and unifyLS) that build a privacy-preserving 2-hop(pp-2-hop) index by adding surrogate nodes,and presents an optimized query processing algorithm based on the pp-2-hop index. Experimental results show that,compared with the original 2-hop method,the index size of the algorithm based on the maxISCover optimization and unifyIS is reduced by 1 to 2 orders of magnitude.
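For readers unfamiliar with 2-hop labeling, the query side is simple: u reaches v iff the out-label of u and the in-label of v share a hop node. A minimal sketch with tiny illustrative labels (the paper's pp-2-hop index additionally inserts surrogate nodes for privacy, which this sketch omits):

```python
# Hedged sketch of 2-hop reachability: intersect out-label of u with
# in-label of v.
def reachable(u, v, label_out, label_in):
    if u == v:
        return True
    return bool(label_out.get(u, set()) & label_in.get(v, set()))

# Labels for the path a -> b -> c, using b as the hop node.
label_out = {'a': {'b'}, 'b': {'b'}}
label_in = {'c': {'b'}, 'b': {'b'}}
print(reachable('a', 'c', label_out, label_in))  # a reaches c through hop b
```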
  • HU Junfeng,CAO Jun
    Computer Engineering. 2015, 41(2): 173-177. https://doi.org/10.3969/j.issn.1000-3428.2015.02.033
    Based on the classical dynamic model of passive biped walking,the global stability of passive biped walking as influenced by environmental and mechanical parameters is discussed,and the computation of stable fixed points under varying parameters is given. Cell-to-cell mapping is used to compute the attractive region of the 1-periodic steady gait of the walking model with variable parameters. It is found that the robustness of passive bipeds is closely connected to the environmental and mechanical parameters. Moreover,this paper proposes two metrics to evaluate the shape of the attractive region:minimum radius and maximum radius. Simulation results also reveal the relationship between the attractive region of passive walking and parameters such as ground slope and mass ratio. The robustness of passive biped walking gaits is further analyzed through special attractive cells and the variation tendency of the maximum and minimum reach of the attractive region.
  • HE Tingnian,LI Xiaohong,JIANG Yun
    Computer Engineering. 2015, 41(2): 178-183,188. https://doi.org/10.3969/j.issn.1000-3428.2015.02.034


  • LV Jinqiu,YOU Xiaoming,LIU Sheng
    Computer Engineering. 2015, 41(2): 184-188. https://doi.org/10.3969/j.issn.1000-3428.2015.02.035
    To overcome the disadvantages that the traditional Ant Colony System(ACS) easily falls into local optima and has low precision on large-scale problems,this paper presents an improved ant colony optimization algorithm. The algorithm introduces the α-nearness measure derived from the minimum 1-tree,which better reflects the chance of a given link being a member of an optimal tour. It improves α-nearness precision by transforming the adjacency matrix and computing a lower bound. In addition,it proposes an adaptive exploration strategy and 3-opt local search. Experimental results show that the algorithm has better global search ability in finding the best solutions and obtains better solutions than ACS and other algorithms.
  • TIAN Pan,HUA Bei,LU Li
    Computer Engineering. 2015, 41(2): 189-192,198. https://doi.org/10.3969/j.issn.1000-3428.2015.02.036
    K-Nearest Neighbor(KNN) is a classical problem whose computational complexity increases rapidly with the size of the dataset,so accelerating KNN on the Graphics Processing Unit(GPU) by exploiting its massive parallel computing power is an interesting research topic. After analyzing existing GPU-based KNN implementations and the architectural features of the GPU,this paper efficiently parallelizes KNN on the GPU. It optimizes data access by making good use of coalesced global memory access,and reduces thread serialization by filtering out in advance as much of the data to be sorted as possible. Experiments on the KDD,Poker and Covertype datasets,and comparisons with existing methods,show that the proposed method reaches up to 266.37×10^9 floating point operations per second in the distance computation phase and up to 26.47×10^9 in the sort phase,both superior to existing methods.
  • CHU Yanjie,XU Zhengguo
    Computer Engineering. 2015, 41(2): 193-198. https://doi.org/10.3969/j.issn.1000-3428.2015.02.037
    A new multi-source information search model based on multi-Agent collaboration is put forward to deal with real-time search over massive,multi-source information. The multi-Agent information search model centers on objects,builds the whole object model through cyclic search,and obtains the information users care about. The model is intelligent and open-ended,and makes multi-source information search more comprehensive and accurate. A Q-learning-based collaborative control algorithm is proposed,which designs different decision-making methods for Markov and non-Markov objects. Experimental results show that the algorithm has better information search ability than probability transfer matrix and probability statistics algorithms.
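The collaborative control above builds on the standard tabular Q-learning update. A minimal sketch; the states, actions, and reward below are illustrative stand-ins for an agent deciding which source to query next:

```python
# Hedged sketch of the tabular Q-learning update:
# Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

Q = {}
actions = ['query_src1', 'query_src2']  # hypothetical action names
q_update(Q, 'start', 'query_src1', 1.0, 'found', actions)
print(Q[('start', 'query_src1')])  # 0 + 0.1 * (1.0 + 0.9*0 - 0) = 0.1
```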
  • ZHANG Chengbin,WANG Kaifu
    Computer Engineering. 2015, 41(2): 199-202. https://doi.org/10.3969/j.issn.1000-3428.2015.02.038
    To address the shortcomings of conventional morphological filters in removing high-density salt-and-pepper noise from digital images,a removal method based on morphological open and close operators is proposed. The method consists of a noise detection stage and an adaptive morphological open-close filtering stage. In the detection stage,a mark image is constructed and adaptive structuring elements are derived from it. In the filtering stage,the possible noise pixels are filtered by morphological opening and closing while noise-free pixels are left unchanged. Simulation results show that the proposed method effectively removes high-density salt-and-pepper noise while preserving detail. By constructing adaptive structuring elements through noise detection,it improves on traditional morphological filtering,and the algorithm is simple with a short processing time.
  • YE Wei,ZHAO Jianhui,ZHAO Yang,WANG Yong
    Computer Engineering. 2015, 41(2): 203-208. https://doi.org/10.3969/j.issn.1000-3428.2015.02.039
    Smoke detection plays an important role in early fire warning,so a dynamic texture recognition algorithm is proposed in this paper. First,the Surfacelet transform is performed on image sequences. Then a generalized Gaussian model is fitted to the Surfacelet coefficients,the model parameters are taken as the feature vector,and the Kullback-Leibler(KL) distance is used as the similarity measure. In experiments,three other Surfacelet-based smoke detection methods are implemented for comparison:mean and variance features with an SVM classifier;mean and variance features with Euclidean distance as the similarity measure;and generalized Gaussian model parameters with Euclidean distance as the similarity measure. Experimental results show that,compared with these methods,the new algorithm has excellent performance and a lower false detection rate.
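The KL distance between two zero-mean generalized Gaussians has a well-known closed form (Do and Vetterli's texture retrieval work), which is presumably what makes this similarity measure cheap to evaluate. A sketch of that closed form; whether the paper uses exactly this parameterization is an assumption:

```python
import math

def ggd_kl(a1, b1, a2, b2):
    """KL divergence between two zero-mean generalized Gaussians with scale
    alpha and shape beta, p(x) ∝ exp(-(|x|/alpha)^beta) (closed form)."""
    g = math.gamma
    return (math.log((b1 * a2 * g(1.0 / b2)) / (b2 * a1 * g(1.0 / b1)))
            + (a1 / a2) ** b2 * g((b2 + 1.0) / b1) / g(1.0 / b1)
            - 1.0 / b1)

print(ggd_kl(1.0, 2.0, 1.0, 2.0))  # identical parameters give ~0
```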
  • LI Juncheng
    Computer Engineering. 2015, 41(2): 209-214. https://doi.org/10.3969/j.issn.1000-3428.2015.02.040
    Existing fractional differential methods for image enhancement are mostly constructed from fractional differentials of order 0~1;few papers discuss image enhancement based on fractional differentials of order 1~2. This paper analyzes the effect of 1~2 order fractional differentiation on image enhancement and constructs a corresponding mask operator. Experimental results demonstrate that the presented operator achieves better enhancement not only than commonly used frequency domain and spatial domain methods,but also than some existing 0~1 order fractional differential operators.
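Fractional differential masks of this kind are usually built from Grünwald-Letnikov coefficients. A minimal sketch of the coefficient recurrence c_k = c_{k-1}·(k-1-v)/k for an order v in (1, 2); the paper's actual mask layout may differ:

```python
# Hedged sketch: Grünwald-Letnikov coefficients for a fractional
# derivative of order v, the standard building block of such masks.
def gl_coefficients(v, n):
    coeffs = [1.0]
    for k in range(1, n):
        coeffs.append(coeffs[-1] * (k - 1 - v) / k)
    return coeffs

print(gl_coefficients(1.5, 4))  # [1.0, -1.5, 0.375, 0.0625]
```

For v between 1 and 2 the coefficients alternate in sign after the first two terms and decay quickly, which is what gives the mask its edge-sharpening behavior.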
  • ZHANG Ming,BAI Songhao,XING Shanshan
    Computer Engineering. 2015, 41(2): 215-218,223. https://doi.org/10.3969/j.issn.1000-3428.2015.02.041
    An Automatic Color Equalization(ACE) algorithm based on pyramid decomposition is proposed to solve problems of the traditional ACE,such as low speed and obscured details in dark regions. The algorithm uses a logarithmic transform to adjust dark regions of the image and a Gaussian kernel to build the image pyramid. Color equalization is applied at the top level of the pyramid and the result is iteratively refined layer by layer using comparisons between only a few pixels. Simulation results show that the proposed algorithm efficiently improves the visual quality of the image,preserves details,and reduces computational complexity,making it more suitable for practical applications.
  • CHEN Liwei,JIANG Yong
    Computer Engineering. 2015, 41(2): 219-223. https://doi.org/10.3969/j.issn.1000-3428.2015.02.042
    Evaluating the overall performance of an image fusion algorithm is an important part of any image fusion project. Existing evaluation indexes assess the quality of the fused image from different aspects and are therefore one-sided when used to evaluate a fusion algorithm,making it difficult to evaluate its overall performance. By employing multi-index decision techniques,this paper proposes a Weighted Total Score(WTS) index that synthesizes multiple evaluation values into a single value measuring the overall performance of an image fusion algorithm. The evaluation results of WTS are compared with those of the Technique for Order Preference by Similarity to Ideal Solution(TOPSIS) and the Rank-sum Ratio(RSR). Experimental results show that the WTS index agrees with subjective evaluation and has overall evaluation ability close to the other two indexes,improving the reliability and accuracy of overall performance evaluation across different fusion algorithms.
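The aggregation idea can be sketched as min-max normalization of each index column followed by a weighted sum per algorithm. The normalization and equal weights below are illustrative assumptions; the paper derives its weighting via multi-index decision techniques:

```python
# Hedged sketch of a weighted-total-score style aggregation.
def weighted_total_score(scores, weights):
    """scores[i][j]: value of index j for algorithm i (higher is better)."""
    cols = list(zip(*scores))
    normed = [[(v - min(c)) / (max(c) - min(c)) if max(c) > min(c) else 0.0
               for v in c] for c in cols]
    rows = list(zip(*normed))
    return [sum(w * v for w, v in zip(weights, row)) for row in rows]

scores = [[0.9, 0.7], [0.5, 0.9]]  # two algorithms, two indexes
print(weighted_total_score(scores, [0.5, 0.5]))
```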
  • ZHAN Yi,LI Shengjie,LI Meng
    Computer Engineering. 2015, 41(2): 224-227,233. https://doi.org/10.3969/j.issn.1000-3428.2015.02.043
    An interpolated pixel can be expanded by Taylor expansion along the direction in which the image grey level varies continuously,but linearly averaging Taylor expansions along both continuous and discontinuous directions enlarges the width of image edges and introduces blurring and zigzag effects. In this paper,a neighborhood filtering method for image interpolation is proposed. A weighting function based on grey-level distance adaptively selects the Taylor expansions in the neighborhood of the interpolated pixel:a pixel lying on the same side of an image edge as the interpolated pixel receives a large weight,so its Taylor expansion is selected,whereas a pixel on the other side receives a small weight and its expansion is discarded. This avoids averaging expansions across image edges,and can reduce edge width and increase edge slope,producing crisp edges.
  • LIU Songping,XIAO Degui
    Computer Engineering. 2015, 41(2): 228-233. https://doi.org/10.3969/j.issn.1000-3428.2015.02.044
    In three-dimensional shadow rendering,the first split region in the Parallel Split Shadow Map(PSSM) algorithm is too small,resulting in aliasing,while the Variance Shadow Map(VSM) algorithm causes serious light bleeding. In view of this,this paper proposes a hybrid algorithm based on PSSM and VSM. An expansion coefficient is set to solve the first-partition problem of PSSM,and blurring together with repeated rendering of the transition region reduces boundary aliasing. MRT technology is used to reduce the light bleeding induced by VSM during rendering. Experimental results on the segmentation method,asymptotic tests,texture size,and the rendering effect of the hybrid algorithm show that the quality of the shadow map is greatly improved compared with the PSSM algorithm and others.
  • YU Ping,WANG Shitong
    Computer Engineering. 2015, 41(2): 234-241. https://doi.org/10.3969/j.issn.1000-3428.2015.02.045
    The classic Competitive Agglomeration(CA) algorithm fails to take into account the information of the few known and important samples during clustering. Moreover,CA uses Euclidean distance as its similarity metric,which is most applicable when the data points are spherically distributed,restricting its scope of application. To solve these problems,a semi-supervised term with learning ability is introduced to enhance the membership matrix,and Euclidean distance is replaced by the distance from a sample to a space composed of a cluster center and its nearest points. A threshold parameter on the similarity of cluster centers is introduced as the criterion for merging. The resulting algorithm is a semi-supervised spatial-distance competitive agglomeration. Two sets of experiments on artificial and real images show that the proposed algorithm is better at finding the correct number of clusters and obtains better clustering results.
  • LI Shunyi,HOU Jin,GAN Lingyun
    Computer Engineering. 2015, 41(2): 242-247. https://doi.org/10.3969/j.issn.1000-3428.2015.02.046
    To address the large amount of redundancy in motion capture data,this paper proposes a key frame extraction method based on inter-frame distance. Key frames that represent the content of a motion need to be extracted so that motion data can be compressed,stored,reconstructed and reused. Quaternions are introduced to represent the difference between two rotations,the distance between two frames is defined as the total rotation difference,and the first frame is taken as the first key frame. The difference between the current frame and the last key frame is then computed continuously:a frame is eliminated when the difference is smaller than a set threshold,and otherwise retained as a new key frame. Spherical linear interpolation is used to reconstruct the sequence. To express the characteristics of human motion,joint velocity is introduced,and reconstruction error is defined by the body posture position error and the motion speed error between original and reconstructed frames. Experimental results demonstrate that motion capture data can be compressed at a high ratio while giving a good visual summary.
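The selection loop above can be sketched directly: per-joint rotation differences are quaternion angles, summed into a frame distance, and a frame becomes a key frame when its distance to the last key frame exceeds a threshold. The threshold and the tiny one-joint motion below are illustrative:

```python
import math

# Hedged sketch of quaternion-based inter-frame distance and threshold
# key frame selection.
def quat_angle(q1, q2):
    """Rotation angle between two unit quaternions (w, x, y, z)."""
    dot = abs(sum(a * b for a, b in zip(q1, q2)))
    return 2.0 * math.acos(min(1.0, dot))

def extract_keyframes(frames, threshold):
    """frames: list of frames, each a list of unit quaternions per joint."""
    keys = [0]  # first frame is always a key frame
    for i in range(1, len(frames)):
        d = sum(quat_angle(a, b) for a, b in zip(frames[keys[-1]], frames[i]))
        if d >= threshold:
            keys.append(i)
    return keys

identity = (1.0, 0.0, 0.0, 0.0)
quarter = (math.cos(math.pi / 4), math.sin(math.pi / 4), 0.0, 0.0)  # 90° about x
frames = [[identity], [identity], [quarter]]
print(extract_keyframes(frames, threshold=1.0))  # frame 2 differs by pi/2
```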
  • TANG Chaowei,ZHANG Xi,WANG Xuefeng,ZHOU Xu,SONG Junping
    Computer Engineering. 2015, 41(2): 248-252,257. https://doi.org/10.3969/j.issn.1000-3428.2015.02.047
    In view of the complexity of networks and terminals in heterogeneous environments,and the abrupt changes of playback progress caused by random user seeking in video-on-demand services,a Scalable Video Coding(SVC) fragment scheduling algorithm based on user behavior characteristics is proposed. Two types of windows are designed:a playback window based on the current playtime that keeps in-order data continuous,and anchor windows for data prefetching,based on the observation that user random seeking follows a Weibull distribution. The Layer-by-Layer(LL) scheduling strategy is used in the playback window and the first anchor window to ensure the timeliness of data,while the rarest-first strategy is used in the other anchor windows to balance the fragment distribution of the whole system. Simulation results in OverSim show that,compared with the current LL scheduling algorithm and a weighted scheduling algorithm,the proposed algorithm improves fragment scheduling performance,shortens response delay,reduces server load,and improves the quality and fluency of user video watching.
  • LI Xuewen,PENG Yuecheng,HUAI Yongjian
    Computer Engineering. 2015, 41(2): 253-257. https://doi.org/10.3969/j.issn.1000-3428.2015.02.048
    Highly interactive visualization of 3D flowering plants on mobile platforms has become an important requirement for plant researchers,garden designers and others. However,existing methods cannot simulate the specific deformation process of a whole plant or a single blade under different touch intensities. Therefore,this paper proposes a visual simulation method for touch feedback of 3D flowering plants on mobile platforms,analyzes the interaction requirements of the whole plant and the force conditions of a single blade,and studies several methods for touch feedback simulation. The simulation is divided into two parts:the deformation of the whole plant is simulated according to dynamics theory,while the deformation of a single blade is calculated from touch screen data and a rotation model of vein skeletons. Deformation processes under different touch intensities are simulated without manually input parameters. Simulation results show that the method runs on mobile platforms with satisfactory realism and frame rates.
  • WANG Fengsui,WANG Guanling,QU Chengming,ZHAO Fa
    Computer Engineering. 2015, 41(2): 258-262,267. https://doi.org/10.3969/j.issn.1000-3428.2015.02.049
    To greatly reduce the computational complexity of Multi-view Video Coding(MVC),an inter-view prediction and Direct-mode early termination algorithm based on macroblock multi-correlation is proposed. The algorithm analyzes the characteristics of temporal and inter-view prediction and the distribution of Direct mode in the Joint Multi-view Video Coding(JMVC) reference software. Comparing the rate-distortion cost between the temporal and inter-view domains determines whether the current macroblock is predicted across views,and the coding mode information of previously encoded macroblocks determines whether Direct mode can be skipped. Experimental results demonstrate that,compared with the full mode decision in JMVC,the proposed method reduces the computational load by 75.62% on average while keeping almost the same rate-distortion performance.
  • PU Yinshu,YE Dejian,JIANG Xiuyan,LIU Xin
    Computer Engineering. 2015, 41(2): 263-267. https://doi.org/10.3969/j.issn.1000-3428.2015.02.050
    Telecom operators’ dedicated video content distribution systems may become the mainstream video content distribution systems for smart TV. To ensure their continued development,this paper evaluates content distribution strategies from the perspective of cost. It proposes cost-related metrics and primitives,builds a model of actual user behavior in the Shanghai Telecom Interactive Personality TV(IPTV) system,implements a large-scale simulation of a telecom video content distribution system,and conducts simulation experiments based on actual user behavior. Experimental results show the quantitative relationships between the metrics and related parameters,and yield the optimized distribution strategy that minimizes cost. This work contributes to telecom operators’ decision-making in choosing distribution strategies and setting parameters.
  • XIAO Yuandong,CAI Shengzhen
    Computer Engineering. 2015, 41(2): 268-271,277. https://doi.org/10.3969/j.issn.1000-3428.2015.02.051
    Traditional remote video surveillance systems require large software and hardware resources,separate their video transmission and capture-device control protocols,demand high bandwidth,and suffer large delays and poor real-time performance. To address this,this paper uses the message structure of the Open Wireless Surveillance Protocol(OWSP) to put video data transmission,device control and other information into a single packet and,based on the iOS mobile operating system and the H.264 video standard,designs and implements mobile video monitoring software that integrates video transmission and control. Test results show that the software effectively improves the performance of mobile video surveillance,with fast control response,low transmission bandwidth,adaptive resolution,and smooth real-time video playback.
  • HE Sheng,LUO Junyong,LIU Yan
    Computer Engineering. 2015, 41(2): 272-277. https://doi.org/10.3969/j.issn.1000-3428.2015.02.052
    Application protocol identification is widely used in network security,and its key problem is discovering protocol features. This paper proposes an efficient and precise method for automatically discovering protocol features. The method exploits the protocol format to tokenize messages,classifies messages according to their token sequences,and estimates the protocol header length from the change curve of the number of classes,thereby determining the scope of word frequency statistics. The byte frequencies of each packet’s message header in a data stream are counted and normalized to obtain the byte frequency vector of the protocol header,and the cosine similarity between the measured protocol and sample protocols is used to classify and identify the protocol. Experimental results show an accuracy of over 93.5% using the signatures extracted by this method.
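The matching step above can be sketched in a few lines: count byte frequencies over message headers, normalize to a 256-dimensional vector, and compare vectors with cosine similarity. The sample messages and header length below are illustrative:

```python
import math
from collections import Counter

# Hedged sketch of byte-frequency-vector protocol matching.
def byte_freq_vector(messages, header_len):
    counts = Counter()
    for m in messages:
        counts.update(m[:header_len])
    total = sum(counts.values())
    return [counts.get(b, 0) / total for b in range(256)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

sample = byte_freq_vector([b'GET /a', b'GET /b'], header_len=4)
measured = byte_freq_vector([b'GET /c'], header_len=4)
print(cosine(sample, measured))  # identical 4-byte headers -> similarity 1.0
```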
  • WU Jun,SHEN Mengting
    Computer Engineering. 2015, 41(2): 278-281,286. https://doi.org/10.3969/j.issn.1000-3428.2015.02.053
    The traditional ultrasonic ranging method is based on the time of flight of a single-frequency pulse;it is flexible in application but has low accuracy. Another,more accurate method is based on multi-frequency continuous waves,but its application is limited by the need for a continuous sound wave. To balance measurement accuracy and applicability,this paper presents a novel method using multi-frequency ultrasonic pulses. The method obtains an accurate distance from the temperature-compensated sound velocity and an accurate time of flight estimated from the inherent time differences of pulses at different frequencies. Experimental results show that the measurement accuracy reaches about 0.3 mm for distances up to 3 m.
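The temperature compensation mentioned above is typically the textbook linear approximation for the speed of sound in air; the constants below are those standard values, not necessarily the paper's, and the round-trip geometry is an assumption:

```python
# Hedged sketch of temperature-compensated distance from time of flight,
# using v ≈ 331.4 + 0.6*T (m/s, T in °C) and a round-trip echo.
def sound_speed(temp_c):
    return 331.4 + 0.6 * temp_c

def echo_distance(tof_s, temp_c):
    return sound_speed(temp_c) * tof_s / 2.0

# A 17.46 ms round trip at 20 °C corresponds to roughly 3 m.
print(echo_distance(0.01746, 20.0))
```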
  • LIU Weixu,LI Xiaoxia,TANG Zhifeng,LV Fuzai,SHEN Ruijun
    Computer Engineering. 2015, 41(2): 282-286. https://doi.org/10.3969/j.issn.1000-3428.2015.02.054
    To address the low signal-to-noise ratio and inconspicuous echo characteristics encountered in ultrasonic guided wave inspection of highway guardrail posts,an Improved Subspace Matching Pursuit(ISMP) algorithm is presented. At each iteration it obtains correlated atom sets from an over-complete chirp atom library using prior information about the echo signal,and achieves echo feature extraction by iteratively finding the best time-frequency atom matching the signal. The algorithm is verified on guided wave inspection signals with a center frequency of 128 kHz. The results show that ISMP effectively extracts the characteristic atoms from echo signals,with an actual measurement error of less than 1%,meeting the requirements of engineering detection.
  • YUAN Xiaofeng,GAO Deyuan,GAO Wu
    Computer Engineering. 2015, 41(2): 287-291,297. https://doi.org/10.3969/j.issn.1000-3428.2015.02.055
    Pico-satellites,which feature low cost and flexibility,are a hot topic in aerospace applications. Due to size and weight limitations,the design of an On-Board Computer(OBC) for pico-satellite systems differs from that of larger satellites. This paper introduces modular design techniques for an OBC based on Commercial Off-the-Shelf(COTS) devices for pico-satellites. Since chip selection is important in electronic system design,suitable COTS devices are selected using the Technique for Order of Preference by Similarity to Ideal Solution(TOPSIS). Two prototype OBC systems based on different low-power MCUs selected by TOPSIS are implemented,and their running speed,power consumption and operating temperature are tested and compared. At an operating temperature of 27℃ and a main frequency of 8 MHz,the MIPS of the OBC systems based on the MSP430 and the ATxmega are 7.449 and 6.781,the running times of an addition are 0.613 μs and 1.381 μs,and the power consumptions are 38.224 mW and 54.411 mW,respectively. The MSP430-based OBC system performs better,and both systems work correctly from -40℃ to 85℃.
  • SHENG Chongchong,HU Xinming,LI Jiajia,WU Baifeng
    Computer Engineering. 2015, 41(2): 292-297. https://doi.org/10.3969/j.issn.1000-3428.2015.02.056
    The main programming method for heterogeneous GPU clusters is hybrid MPI / CUDA or simple variants of it. However,because hybrid MPI / CUDA is not transparent to the underlying architecture,programmers need detailed knowledge of the hardware resources,which makes programs more complicated and less portable. This paper presents the Distributed Parallel Programming Framework(DISPAR),a new programming framework for node-level heterogeneous GPU clusters based on the data flow model. DISPAR contains two subsystems:StreamCC,a code conversion tool that converts DISPAR code into hybrid MPI / CUDA code,and StreamMAP,a runtime system that detects heterogeneous computing resources and maps tasks to appropriate computing units automatically. Experimental results show that these methods make efficient use of computing resources and simplify programming on heterogeneous GPU clusters. Moreover,since the code does not rely on a specific execution platform,it has better portability and scalability.
  • CHEN Chongchen,Rudolf Fleischer
    Computer Engineering. 2015, 41(2): 298-302. https://doi.org/10.3969/j.issn.1000-3428.2015.02.057
    Partitioning a multi-colored point set into monochromatic parts is an optimization problem in computational geometry that asks how to dissect a plane containing polychrome points into regions each containing points of a single color. Since partitioning with polygons does not yet yield good results,this paper proposes an approach based on partitioning with straight lines. The problem is discretized to show that a non-deterministic Turing machine can decide it;Max2SAT is then reduced to it in polynomial time,proving it NP-hard,so partitioning a multi-colored point set into monochromatic parts with straight lines is NP-complete. An approximation algorithm for the optimization version is given via an L-reduction from the Set Cover problem,and its approximation ratio is proved to be O(lg n).
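The O(lg n) ratio mentioned above is the classic guarantee of the greedy Set Cover heuristic, which the L-reduction carries over. A minimal sketch of that greedy rule on an illustrative instance:

```python
# Hedged sketch of greedy Set Cover: repeatedly pick the set covering the
# most still-uncovered elements (ln n approximation guarantee).
def greedy_set_cover(universe, sets):
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        if not (sets[best] & uncovered):
            break  # instance not coverable
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

sets = [{1, 2, 3}, {3, 4}, {4, 5}]
print(greedy_set_cover({1, 2, 3, 4, 5}, sets))  # picks sets 0 then 2
```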
  • WANG Yong,TANG Xiaohu,ZHANG Lijuan,YANG Ruiqin
    Computer Engineering. 2015, 41(2): 303-307. https://doi.org/10.3969/j.issn.1000-3428.2015.02.058
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    In Radio Frequency Identification (RFID) systems, when the number of unknown tags is estimated with the traditional ALOHA algorithm, a large number of tags combined with a small initial frame length causes a large estimation error. Keeping the initial frame length fixed, the reader changes the tag response method to obtain an accurate estimate of the tag number. This paper studies a robust tag estimation method and the Prefix Randomized Query Tree (PRQT) algorithm, and then proposes the Prefix Maximized Query Tree (PMQT) tag anti-collision protocol. Theoretical analysis shows that the system efficiency exceeds 50%. Simulation results demonstrate that PMQT outperforms PRQT by about 18%~30% in system efficiency. In addition, the PMQT algorithm is tolerant of inaccuracy in the tag number estimate.
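    PMQT builds on the query tree family of anti-collision protocols. A minimal sketch of the underlying prefix-splitting idea (basic binary query tree only; the prefix-maximization and tag-estimation steps are specific to the paper):

```python
# Basic binary Query Tree anti-collision: the reader broadcasts a prefix;
# tags whose ID starts with it reply. Zero replies = idle slot, one reply
# = tag identified, more = collision, so the prefix is extended by 0 and 1.
def query_tree(tags, prefix=""):
    responders = [t for t in tags if t.startswith(prefix)]
    if len(responders) <= 1:
        return responders, 1          # identified tags, slots used
    ids0, s0 = query_tree(tags, prefix + "0")
    ids1, s1 = query_tree(tags, prefix + "1")
    return ids0 + ids1, 1 + s0 + s1   # collision slot plus both subtrees

tags = ["0010", "0111", "1100", "1101"]
identified, slots = query_tree(tags)
print(identified, slots)   # all 4 tags read in 11 slots
```

    Idle and collision slots are exactly what schemes like PRQT and PMQT try to reduce; here 4 tags cost 11 slots, a system efficiency of about 36%.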
  • LI Zhijian,XIAO Yilin
    Computer Engineering. 2015, 41(2): 308-312. https://doi.org/10.3969/j.issn.1000-3428.2015.02.059
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    To improve identification efficiency and reduce communication overhead, a novel algorithm, the Binary Code Modulation Algorithm (BCMA), is proposed. BCMA works as follows: an activated tag generates and sends back a 2m-bit binary code, the Master Control Relay (MCR), on which the location information of the tag's ID in the multi-branch tree is modulated. After receiving the MCR, the reader finds the collided bits, demodulates the branch information, and groups the tags into determinate subsets, so BCMA avoids generating idle slots. Analysis and simulation results show that, compared with existing multi-branch algorithms such as the common octree algorithm, BCMA improves the system throughput by 168%.
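    A hedged sketch of the branch-occupancy idea that lets a reader skip idle slots (the one-hot encoding below is an illustrative assumption, not the paper's exact MCR format): each tag signals which subtree it belongs to, the replies superimpose as a bitwise OR on the air, and the set bits mark exactly the occupied branches.

```python
# Illustrative assumption: tags reply with a one-hot code for their
# branch; the reader ORs all replies and queries only occupied branches.
def occupied_branches(tags, depth, m):
    """Return subtree indices (the next m ID bits) holding >= 1 tag."""
    branches = 0
    for t in tags:
        idx = int(t[depth:depth + m], 2)   # tag's branch number
        branches |= 1 << idx               # one-hot reply, OR'd on air
    return [i for i in range(2 ** m) if branches >> i & 1]

tags = ["0010", "0111", "1100"]
print(occupied_branches(tags, 0, 2))  # branches 00, 01 and 11 occupied
```

    Branch 10 is empty, so a reader using this information never issues the corresponding idle query.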
  • XU Zhenpeng,SHEN Hao,ZENG Weini
    Computer Engineering. 2015, 41(2): 313-316. https://doi.org/10.3969/j.issn.1000-3428.2015.02.060
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    A clustering-based failure detection architecture and algorithm are proposed according to the topology of Ad Hoc networks. Active and backup cluster managers are designated, and exception detection for cluster members is implemented by the selected redundant cluster managers. The sending, monitoring, prediction, and updating processes of the heartbeat message are designed for fault detection, and an updating method for the heartbeat prediction is added to adapt dynamically to the changing topology of Ad Hoc networks. Through the backup cluster manager and the inter-cluster exception data sharing mechanism, the reliability of system fault detection is improved. Simulation results show that the proposed failure detection mechanism achieves high accuracy and meets the reliability requirements that upper-layer applications place on the system.
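    The heartbeat prediction step can be sketched as follows (the sliding-window size and jitter margin are illustrative assumptions, not the paper's parameters): the cluster manager predicts the next heartbeat arrival from recent inter-arrival times and marks a member as suspect once the predicted deadline passes.

```python
# Illustrative heartbeat monitor: predict the next arrival from the mean
# of recent inter-arrival gaps plus a fixed jitter margin.
from collections import deque

class HeartbeatMonitor:
    def __init__(self, window=8, margin=0.5):
        self.arrivals = deque(maxlen=window)
        self.margin = margin              # slack for network jitter

    def record(self, t):
        self.arrivals.append(t)

    def deadline(self):
        """Latest acceptable time of the next heartbeat, or None."""
        if len(self.arrivals) < 2:
            return None
        a = list(self.arrivals)
        gaps = [b - x for x, b in zip(a, a[1:])]
        return a[-1] + sum(gaps) / len(gaps) + self.margin

    def suspect(self, now):
        d = self.deadline()
        return d is not None and now > d

mon = HeartbeatMonitor()
for t in [0.0, 1.0, 2.1, 3.0]:    # heartbeats roughly every second
    mon.record(t)
print(mon.suspect(4.1), mon.suspect(6.0))
```

    Because the deadline is recomputed from the observed gaps, the predictor adapts as mobility changes the delivery delay, which is the role the heartbeat-prediction update plays in the proposed scheme.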
  • YUAN Zhongshu,LU Yang
    Computer Engineering. 2015, 41(2): 317-321. https://doi.org/10.3969/j.issn.1000-3428.2015.02.061
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    The Lightweight TCP/IP protocol stack (LwIP) is mainly used in resource-constrained embedded devices. To meet the real-time requirements of such devices, this paper analyzes the internal mechanisms of LwIP, locates its performance bottlenecks through experimental measurement, and designs an optimization scheme for LwIP. The main performance bottlenecks are memory copying and checksum computation, so optimized memory copy and checksum algorithms are presented. Additionally, to meet the higher priority requirement of urgent data, a priority management mechanism is presented that ensures emergency packets take precedence over ordinary data packets. Experimental tests show that these optimizations improve the real-time performance of LwIP.
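    The checksum bottleneck concerns the standard Internet checksum (RFC 1071), a 16-bit one's-complement sum; a common optimization is to accumulate into a wide register and fold the carries once at the end instead of per word. A minimal reference sketch, shown in Python for clarity rather than LwIP's C:

```python
# RFC 1071 Internet checksum: 16-bit one's-complement sum with the
# carries folded back in after the main loop.
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"               # pad odd-length payload
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]   # 16-bit big-endian words
    while total >> 16:                # fold carries into low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF            # one's complement of the sum

# Worked example from RFC 1071 (bytes 00 01 f2 03 f4 f5 f6 f7):
print(hex(internet_checksum(b"\x00\x01\xf2\x03\xf4\xf5\xf6\xf7")))  # 0x220d
```

    In C, the same deferred-folding trick plus word-at-a-time reads is what makes an optimized checksum noticeably cheaper than a naive per-byte loop.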