
15 September 2014, Volume 40 Issue 9
    

  • SONG Di,ZHANG Dong-bo,LIU Xia
    Computer Engineering. 2014, 40(9): 1-5. https://doi.org/10.3969/j.issn.1000-3428.2014.09.001

    Classic scratch detection usually relies on a variety of edge operators. Because edge detection algorithms are sensitive to texture and noise, they often produce many false positives; on metal surfaces, whose material carries complex textures, the false positives are particularly serious. Based on the bar-pattern detection principle of Gabor filtering, combined with anisotropic texture suppression and hysteresis multi-threshold technology, a scratch detection method for mobile phone accessories is proposed. The method first extracts candidate scratch regions with Gabor filtering, then applies anisotropic texture suppression to the metal surface, and finally extracts the scratches accurately with hysteresis multi-thresholding. Experimental results show that the method greatly suppresses the background texture of the metal surface while extracting complete scratch images. Its false positive rate, false negative rate and probability of contour missing reach 2%, 3.7% and 5.5% respectively, and it clearly outperforms edge-based scratch detection methods.
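The pipeline the abstract describes, an oriented Gabor response followed by hysteresis multi-thresholding, can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation; the kernel parameters and the 4-neighbourhood propagation are assumptions.

```python
import numpy as np

def gabor_kernel(ksize=9, sigma=2.0, theta=0.0, lambd=4.0, gamma=0.5):
    """Real part of a Gabor kernel tuned to bar-like features at angle theta."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
            * np.cos(2 * np.pi * xr / lambd))

def hysteresis(response, low, high):
    """Keep weak responses only if they connect (4-neighbourhood) to a strong one."""
    strong = response >= high
    weak = response >= low
    keep = strong.copy()
    while True:
        grown = keep.copy()
        grown[1:, :] |= keep[:-1, :]   # propagate down
        grown[:-1, :] |= keep[1:, :]   # propagate up
        grown[:, 1:] |= keep[:, :-1]   # propagate right
        grown[:, :-1] |= keep[:, 1:]   # propagate left
        grown &= weak                  # never grow outside the weak mask
        if (grown == keep).all():
            return keep
        keep = grown
```

The double threshold is what lets a faint but connected scratch survive while isolated texture responses of the same strength are discarded.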

  • FEI Xiong-wei,LI Ken-li,YANG Wang-dong
    Computer Engineering. 2014, 40(9): 6-12. https://doi.org/10.3969/j.issn.1000-3428.2014.09.002

    In order to enhance the efficiency of the Advanced Encryption Standard (AES) and exploit the general-purpose computing ability of the Graphics Processing Unit (GPU), all three versions of GPU-parallel AES, namely the 128 bit, 192 bit and 256 bit versions, are implemented on the Compute Unified Device Architecture (CUDA). Optimization algorithms for the three parallel AES versions are then proposed. These algorithms first consider the number of threads per block, the shared memory size and the total number of blocks, and then use empirical data on the optimal block size to guide the optimal blocking of the AES algorithm on the GPU. Experimental results show that, compared with unoptimized parallel AES, these algorithms obtain mean encryption speedups of 5.28%, 14.55% and 12.53% respectively on an Nvidia Geforce G210 graphics card, and of 12.48%, 15.40% and 15.84% on an Nvidia Geforce GTX460 graphics card. In addition, these algorithms are well suited to accelerating encryption in the Secure Socket Layer (SSL).
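The optimization described above, choosing a CUDA thread-block size from the threads per block, shared memory usage and block count, can be illustrated with a small occupancy heuristic. The resource limits below are illustrative placeholders, not the figures of the G210/GTX460 used in the paper.

```python
def pick_block_size(smem_per_thread,
                    smem_per_sm=48 * 1024,    # bytes of shared memory per SM (assumed)
                    max_threads_per_sm=1536,  # resident-thread limit per SM (assumed)
                    max_blocks_per_sm=8,      # resident-block limit per SM (assumed)
                    candidates=(64, 128, 192, 256, 512)):
    """Return the candidate thread-block size with the highest estimated
    occupancy: resident blocks per SM are limited by shared memory, by the
    thread count, and by the hardware block limit."""
    best, best_occ = None, -1.0
    for bs in candidates:
        blocks_by_smem = smem_per_sm // max(1, bs * smem_per_thread)
        blocks_by_threads = max_threads_per_sm // bs
        resident = min(blocks_by_smem, blocks_by_threads, max_blocks_per_sm)
        occ = resident * bs / max_threads_per_sm
        if occ > best_occ:
            best, best_occ = bs, occ
    return best, best_occ
```

With modest shared-memory use the heuristic favours larger blocks (fewer resident blocks are wasted against the per-SM block limit); with heavy shared-memory use it falls back to small blocks, mirroring the trade-off the abstract tunes experimentally.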

  • LI Yong-zhi,LI Guo-zheng,GAO Jian-yi,ZHANG Zhi-feng,FAN Quan-chun,XU Jia-tuo, BAI Gui-e,CHEN Kai-xian,SHI Hong-zhi,SUN Sheng,LIU Yu,CHEN Jia-chang,MI Tao, JIA Xin-hong,ZHAO Shuang,SHAO Feng-feng,LIU Jun-lian,GUO Yu-meng
    Computer Engineering. 2014, 40(9): 13-18,22. https://doi.org/10.3969/j.issn.1000-3428.2014.09.003

    In order to study how astronauts adapt to the challenges that a long-term closed environment poses to human health (physiological, psychological and mental) and bodily function, a differentiation model based on multi-label learning is proposed. This paper adopts the "inspection, auscultation and olfaction, inquiry and pulse-taking" diagnostic methods of Traditional Chinese Medicine (TCM) to collect data on human life activity states in a long-term closed environment, and uses data mining methods to study and explain their characteristics and patterns of variation. In the experiment, the average precision of the classification model built on the fused data reaches 80%.

  • FENG Zi-zhu,ZHAO Yi-qiang,LIU Chang-long
    Computer Engineering. 2014, 40(9): 19-22. https://doi.org/10.3969/j.issn.1000-3428.2014.09.004

    With the widespread use of Intellectual Property (IP) cores in System-on-Chip (SoC) design, protecting hardware IP cores against piracy during evaluation becomes a major concern. Embedding a sequential hardware Trojan inside an IP core is a new way to protect its evaluation version. This paper proposes an improved framework to lengthen the Trojan's activation time, which is the decisive factor in the expiry date of an IP core. The sequential Trojan is inserted into the unused states of a Finite State Machine (FSM) in the target circuit, rare nodes forming a sequence are chosen as the Trojan's trigger conditions, and the normal function of the IP core is disturbed once the Trojan is activated. Simulation results demonstrate that, when the number of states is reasonably chosen as 3 and the sequence length as 4, the improved framework lengthens the activation time of the inserted Trojan by a factor of 120 while reducing the design overhead by 0.123%.
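The trigger mechanism, a rare input sequence that walks the FSM through otherwise-unused states before the Trojan fires, can be sketched as a simple sequence detector. This is a behavioural illustration with invented symbols, not a hardware design, and it restarts matching naively on a mismatch.

```python
def make_trigger(sequence):
    """Stateful detector: returns a step() function that emits True exactly
    when the rare trigger sequence has just been observed in order."""
    state = {"i": 0}  # index of the next expected symbol (the FSM state)
    def step(symbol):
        if symbol == sequence[state["i"]]:
            state["i"] += 1
            if state["i"] == len(sequence):
                state["i"] = 0   # Trojan activated; rearm
                return True
        else:
            # fall back; restart partially if this symbol opens the sequence
            state["i"] = 1 if symbol == sequence[0] else 0
        return False
    return step
```

The longer the sequence and the rarer each trigger node, the longer the expected activation time, which is exactly the quantity the framework maximizes.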

  • YU Min-jie,YI Ping,GUAN Han-nan
    Computer Engineering. 2014, 40(9): 23-26,31. https://doi.org/10.3969/j.issn.1000-3428.2014.09.005

    Traditional GPS can hardly perform localization in indoor environments because of walls and obstacles. Mainstream localization algorithms work in the horizontal direction, while localization in the vertical direction is still a new topic. This paper presents an on-demand indoor multi-storey localization algorithm, which is fingerprint-free and can be deployed rapidly and on demand in a multi-storey building. In the vertical direction, it proposes the Multi-storey Differential (MSD) algorithm, whose main idea is to differentiate the RSSI received from different floors to determine the exact floor of the test point. Both simulation and practical experiments are conducted to verify the accuracy of the algorithm.
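The MSD idea, comparing the RSSI heard from APs on different floors and picking the strongest, can be sketched in a few lines. The AP-to-floor grouping and the use of a plain mean are assumptions for illustration.

```python
def locate_floor(rssi_by_floor):
    """rssi_by_floor: {floor: [RSSI readings in dBm from that floor's APs]}.
    Return the floor whose APs are heard strongest on average; comparing the
    per-floor means mirrors the 'differential' in MSD."""
    means = {floor: sum(vals) / len(vals) for floor, vals in rssi_by_floor.items()}
    return max(means, key=means.get)
```
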

  • LI Zhong-wen,DENG Teng-bin,MA Shi-long
    Computer Engineering. 2014, 40(9): 27-31. https://doi.org/10.3969/j.issn.1000-3428.2014.09.006

    Time series are an important kind of data object and are ubiquitous. Because of their very large volume and complexity, querying and analyzing the raw data incurs high costs in time and memory. A method for querying and displaying time-series data based on segmented extreme values is proposed. It splits the time range to be queried and analyzed into periods, determines the number of points to access in each period according to that period's extreme values and the total number of access points, and accesses the points uniformly through the database's own query mechanism, combined with multi-threading to query and draw the curve of each period in parallel. Experimental results show that, compared with traditional methods, the number of access points can be specified, the drawn curve approximates the original curve well for a given number of access points, and the query and drawing time is greatly shortened, giving the method good engineering practicality.
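The segmentation step, keeping each period's extreme values so the drawn curve tracks the original with a bounded number of access points, might look like this NumPy sketch. Equal-width segments are an assumption; the paper accesses the points through the database's own query mechanism rather than in memory.

```python
import numpy as np

def extreme_value_downsample(t, v, n_segments):
    """Split the series into n_segments windows and keep, per window, the
    points holding the minimum and the maximum value, so peaks and troughs
    of the original curve survive the reduction."""
    keep = set()
    bounds = np.linspace(0, len(v), n_segments + 1, dtype=int)
    for a, b in zip(bounds[:-1], bounds[1:]):
        if a == b:
            continue
        keep.add(a + int(np.argmin(v[a:b])))
        keep.add(a + int(np.argmax(v[a:b])))
    idx = sorted(keep)
    return t[idx], v[idx]
```

At most two points per segment are accessed, yet the global extremes are guaranteed to be among them, which is why the approximation stays visually faithful.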

  • WANG You-zhao,WEN Qi,HUANG Jing
    Computer Engineering. 2014, 40(9): 32-36. https://doi.org/10.3969/j.issn.1000-3428.2014.09.007
    The traditional method of parsing Substation Configuration Description Language (SCL) files based on the Document Object Model (DOM) expands the whole file in memory as a tree structure, which suffers from high memory consumption. Exploiting the redundancy of text-node information in SCL, improved algorithms are proposed that build an index over the text nodes using dynamic arrays, hash tables and balanced binary search trees. Experimental results show that the DOM algorithm based on a balanced binary search tree reduces memory consumption by 46%~66% for common SCL files, and the DOM algorithm based on a hash table cuts 40%~59.8% for larger SCL files. Both improved algorithms reduce the memory consumption of SCL parsing while preserving parsing speed.
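The hash-table variant, indexing redundant SCL text nodes so each distinct string is stored only once, can be sketched with a Python dict standing in for the hash table; the node texts below are invented examples.

```python
class TextNodeIndex:
    """Hash-table index over DOM text nodes: each distinct string is stored
    once, and repeated nodes reference the stored copy instead of holding
    their own duplicate, which is where the memory saving comes from."""
    def __init__(self):
        self._table = {}
        self.total_nodes = 0

    def intern(self, text):
        self.total_nodes += 1
        # setdefault returns the first-stored object for an existing key
        return self._table.setdefault(text, text)

    @property
    def distinct_nodes(self):
        return len(self._table)
```
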
  • LI Guo-ding,FENG Zhi-yong,RAO Guo-zheng,WANG Xin
    Computer Engineering. 2014, 40(9): 37-41. https://doi.org/10.3969/j.issn.1000-3428.2014.09.008
    With the advance of the Semantic Web, the Resource Description Framework (RDF) data published on the Web has reached the scale of ten billion triples and keeps growing geometrically, so Simple Protocol and RDF Query Language (SPARQL) query methods on a stand-alone machine are no longer applicable. To address this problem, this paper proposes a SPARQL Basic Graph Pattern (BGP) search algorithm based on the Bulk Synchronous Parallel (BSP) model. According to the graph nature of RDF data and the definition of BGP, it divides the whole process into a "matching" stage and an "iteration" stage: each triple pattern is matched first, and iteration then produces the final query results. The algorithm is implemented on the HAMA distributed computing framework. Experimental results show that it achieves higher query efficiency than a MapReduce-based SPARQL algorithm and can support SPARQL queries over large-scale RDF data.
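The two stages can be illustrated sequentially: a "matching" pass binds each triple pattern independently, and an "iteration" pass joins the partial solutions on shared variables. This single-machine sketch only shows that logic; the actual algorithm runs it in BSP supersteps on HAMA.

```python
def match_bgp(triples, patterns):
    """Match a SPARQL basic graph pattern (list of triple patterns, with
    variables written '?x') against a set of RDF triples."""
    def bind(pattern, triple):
        binding = {}
        for p, t in zip(pattern, triple):
            if p.startswith("?"):
                if binding.get(p, t) != t:   # same var must bind consistently
                    return None
                binding[p] = t
            elif p != t:                     # constant term must match exactly
                return None
        return binding

    solutions = [{}]
    for pattern in patterns:                 # one "iteration" per pattern
        per_pattern = [b for t in triples
                       if (b := bind(pattern, t)) is not None]
        merged = []
        for sol in solutions:
            for b in per_pattern:
                if all(sol.get(k, v) == v for k, v in b.items()):
                    merged.append({**sol, **b})
        solutions = merged
    return solutions
```
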
  • REN Bao-ning,LIANG Yong-quan,ZHAO Jian-li,LIAN Wen-juan,LI Yu-jun
    Computer Engineering. 2014, 40(9): 42-45. https://doi.org/10.3969/j.issn.1000-3428.2014.09.009
    For personalized movie recommendation, this paper proposes a user interest model with dynamically updated multi-dimensional weights. It divides a movie into five dimensions, actor, director, category, area and time, and calculates the similarity between films along each dimension. It normalizes these similarities into the multi-dimensional weights of the user interest model, and applies the TF-IDF algorithm to calculate the weights of the features within each dimension, so that both the dimension weights and the feature weights are updated dynamically by a content-based recommendation algorithm. Experiments on the MovieLens data set show that the model achieves higher recommendation precision and recall, can discover user preferences over the movie dimensions, and mitigates user interest drift.
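The TF-IDF weighting of features within one dimension can be sketched as follows, taking the director dimension as an example. The catalogue, the user history and the +1 smoothing are illustrative assumptions, not the paper's data or exact formula.

```python
import math
from collections import Counter

def tfidf_feature_weights(user_features, catalogue):
    """user_features: feature values (e.g. directors) of the movies a user
    watched; catalogue: per-movie feature lists over the whole corpus.
    TF comes from the user's viewing history, IDF from the corpus."""
    n = len(catalogue)
    tf = Counter(user_features)
    total = sum(tf.values())
    weights = {}
    for feat, count in tf.items():
        df = sum(1 for movie in catalogue if feat in movie)   # document frequency
        weights[feat] = (count / total) * math.log((n + 1) / (df + 1))
    return weights
```

Re-running this after every rating is what makes the feature weights "dynamically updated": a feature the user keeps choosing gains TF, while a feature common to the whole catalogue is discounted by IDF.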
  • GE Xing,SHEN Yao,XU Chang-liang
    Computer Engineering. 2014, 40(9): 46-50,58. https://doi.org/10.3969/j.issn.1000-3428.2014.09.010
    In routine massive data analysis, CPU/IO-intensive analysis queries are complex and time-consuming, yet share common components, and it is challenging to detect, share and reuse those components among thousands of SQL-like queries. Aiming at these problems, this paper proposes a signature-index approach and implements the LSShare system to solve the Multiple Query Optimization (MQO) problem for a recurring query set in the cloud. It generates a signature for each query from its Abstract Syntax Tree (AST), then builds a simple but efficient index that, combined with SQL-rewriting techniques, identifies and shares the common components of multiple queries. LSShare gradually optimizes the recurring query set as runs accumulate. Experimental results demonstrate that the system outperforms traditional query optimization when common components are shared, saving nearly a third of the execution time.
  • ZHAI Hai-bo,ZHUANG Yi,HUO Ying
    Computer Engineering. 2014, 40(9): 51-54,65. https://doi.org/10.3969/j.issn.1000-3428.2014.09.011
    The SDPBloom automatic discovery algorithm cannot judge the Quality of Service (QoS) compatibility of endpoints in advance during the participant discovery phase, so a large amount of information about QoS-incompatible endpoints may be stored on every node and transmitted over the network, consuming excessive memory and network resources. To solve this problem, this paper proposes an automatic discovery algorithm based on a Service Ability Vector (SAV), which uses a Bloom Filter Vector (BFV) together with the SAV to judge whether the topic names and types of endpoints match and whether their QoS is compatible, thereby reducing unnecessary information transmission and storage. Experimental results show that the algorithm consumes less memory and network transmission than the SDP_ADA and SDPBloom algorithms.
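The BFV pre-filter relies on a standard Bloom filter: topic names are hashed into a bit vector, so a node can cheaply rule out non-matching endpoints before any detailed check. A minimal sketch, in which the vector size, hash count and SHA-256-based hashing are assumptions:

```python
import hashlib

class BloomFilter:
    """Compact membership sketch: false positives are possible (a name may
    seem present), false negatives are not (an added name always matches)."""
    def __init__(self, m=256, k=4):
        self.m, self.k = m, k
        self.bits = 0

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))
```

Because the filter can only over-approximate, a positive answer still needs the SAV/QoS check; a negative answer, however, is definitive and saves the transmission entirely.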
  • QIAN Guang-ming
    Computer Engineering. 2014, 40(9): 55-58. https://doi.org/10.3969/j.issn.1000-3428.2014.09.012
    In a real-time system scheduled with the Earliest Deadline First (EDF) algorithm, if a request to insert new tasks and/or accelerate current tasks occurs and the remaining bandwidth of the system is not enough for it, part of the bandwidth has to be freed and the system changes its running mode. This paper discusses the influence of such a mode change on schedulability, based on an analysis of the dynamic process of inserting a new task and/or accelerating a current task. Using the processor demand criterion, it proves that a deadline can be missed only before a certain time point, so the length of the transition can be reasonably bounded, which yields a clean three-stage research model. Illustrative examples are also given.
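The processor demand criterion invoked in the proof can be made concrete: a sporadic task set is EDF-schedulable iff, for every interval length t, the cumulative demand of jobs whose deadlines fall inside the interval does not exceed t. A sketch that checks this at every absolute deadline up to a finite horizon (the horizon choice here is illustrative, not the paper's bound):

```python
def dbf(C, T, D, t):
    """Demand bound function: worst-case execution demand, within any interval
    of length t, of a sporadic task with WCET C, period T, relative deadline D."""
    return max(0, (t - D) // T + 1) * C

def edf_schedulable(tasks, horizon):
    """tasks: list of (C, T, D). Check sum(dbf) <= t at every absolute
    deadline up to the horizon (the only points where demand can jump)."""
    deadlines = sorted({D + k * T
                        for (C, T, D) in tasks if D <= horizon
                        for k in range((horizon - D) // T + 1)})
    return all(sum(dbf(C, T, D, t) for (C, T, D) in tasks) <= t
               for t in deadlines)
```

The mode-change argument in the paper is exactly about where this inequality can first fail after an insertion or acceleration.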
  • HUANG He-jie,KANG Fei,SHU Hui
    Computer Engineering. 2014, 40(9): 59-65. https://doi.org/10.3969/j.issn.1000-3428.2014.09.013
    Traditional reverse analysis methods are not very effective against algorithms protected by a virtual machine, because of virtualization and code obfuscation. Aiming at this problem, this paper presents a reverse engineering technique for virtual machine protection based on data-flow analysis. It uses the Pin platform to record data-flow information dynamically during the execution of the protected algorithm, analyses the recorded information, restores the trace of virtual machine instructions and the control flow graph of the protected algorithm, and derives the data generation process hierarchically from the trace. The analyst then uses this information to reconstruct the protected algorithm. Experimental results show that the proposed method correctly restores the program's control flow and data generation process, and helps the analyst reconstruct the protected algorithm.
  • ZHAO Liang-zhen,WANG Bo-xing
    Computer Engineering. 2014, 40(9): 66-70. https://doi.org/10.3969/j.issn.1000-3428.2014.09.014
    In a multidisciplinary collaborative simulation platform, simulation software is highly varied and heterogeneous. To ease data exchange between different types of simulation components, this paper presents a wrapping technology that provides consistent access to simulation components. It describes the composition and packaging targets of a simulation component, and studies the key wrapping technologies, including the component wrapping mechanism, data variable wrapping, and the mapping and transmission of wrapped variables. Finally, by integrating the component wrapping tools with the collaborative simulation platform, the feasibility of the technology is verified. Concrete wrapping examples show that the technology improves the management and reuse of models and data, and reduces the management difficulty of complex simulation processes.
  • ZHANG Ying,WU He-sheng
    Computer Engineering. 2014, 40(9): 71-76. https://doi.org/10.3969/j.issn.1000-3428.2014.09.015
    Hash algorithms play a key role in high-performance multi-process load balancing. Studies of hash algorithms for multi-process load balancing concentrate mainly on their design and application; analyses and comparative studies of the performance of existing hash algorithms are few. This paper therefore summarizes the common properties that a hash algorithm for multi-process load balancing should have, and selects five major hash algorithms applied in this setting. Theoretical analysis and experimental evaluation of their balance of allocation and time consumption provide a basis for selecting a hash algorithm for multi-process load balancing, and show that Toeplitz Hash is the best.
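The "balanced allocation" property the paper evaluates can be measured directly: hash a stream of flow keys into buckets (worker processes) and inspect the load distribution. The MD5-based hash below is a stand-in for illustration, not one of the five algorithms compared in the paper.

```python
import hashlib

def bucket_loads(keys, n_buckets, hash_fn):
    """Distribute keys over n_buckets with hash_fn; return per-bucket loads."""
    loads = [0] * n_buckets
    for k in keys:
        loads[hash_fn(k) % n_buckets] += 1
    return loads

def md5_hash(key):
    """Stand-in hash: first 4 bytes of the MD5 digest as an integer."""
    return int.from_bytes(hashlib.md5(key.encode()).digest()[:4], "big")
```

Comparing the maximum bucket load against a degenerate hash makes the balance criterion concrete: the flatter the distribution, the better the algorithm spreads flows across processes.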
  • CUI Jing-song,HE Song,GUO Chi,HE Hui-lin
    Computer Engineering. 2014, 40(9): 77-81. https://doi.org/10.3969/j.issn.1000-3428.2014.09.016
    A cloud management platform and the end users of virtual machines generally communicate through proxy software or plug-ins, but the convenience and anti-jamming capability of this method are poor. Aiming at this problem, this paper proposes a transparent message channel between a cloud service node (the physical host that the virtual machine lies on) and the end user of the virtual machine, based on the virtual desktop of the Kernel-based Virtual Machine (KVM). A message control terminal built into the cloud management platform receives and processes the messages that the service node sends to end users, transforms the messages into images, and reads the image content as bitmap pixel data into a specified file that serves as the source of the message sending module. By modifying the source code of the Virtual Network Computing (VNC) server side integrated in Qemu-KVM, it adds a message sending module and a feedback receiving module, blends the messages into the desktop image of the virtual machine, and processes the feedback from the remote VNC client. This builds a two-way interactive message channel between the cloud platform and end users that is transparent to the virtual machine itself. Experimental results verify that the scheme is feasible.
  • WANG Ding,WANG Shan-shan,XI Xiao-yu
    Computer Engineering. 2014, 40(9): 82-87. https://doi.org/10.3969/j.issn.1000-3428.2014.09.017
    Aiming at the high end-to-end delay and packet loss rate of single-path protocols in high-speed environments, this paper modifies the Dynamic Source Routing (DSR) protocol. Using HELLO messages, each node obtains its number of neighbors and from it computes a neighbor change ratio. During route discovery, the length of a route is calculated by combining routing distance with hop count, and neighbor nodes whose neighbor change ratio and route length are lower are chosen to join the route, so routes of high stability are selected. Simulation results show that in high-speed environments the algorithm keeps the end-to-end delay of packet transmission under control, dramatically increases the packet delivery ratio and reduces routing overhead.
  • CAO Yi-ning,XIE Yong-qiang,XU Bo,WANG Jing-jun
    Computer Engineering. 2014, 40(9): 87-91. https://doi.org/10.3969/j.issn.1000-3428.2014.09.018
    For the multi-layer control problem in IP/WDM networks caused by the independent control of the IP layer and the optical layer, a cognition-based cross-layer control system is proposed. The system builds a new cross-layer controller on top of the original IP-layer and optical-layer control planes, following overlay network design ideas. The controller performs traffic recovery and resource scheduling dynamically in case of network failure. By enhancing the sharing of information such as traffic features, bandwidth usage and failure states, the new system achieves state-based recovery and avoids resource preemption and recovery oscillation. Performance evaluation shows that the system accelerates recovery with limited signaling cost and achieves better network survivability.
  • LIU Jia,GUO Ai-huang
    Computer Engineering. 2014, 40(9): 92-95,105. https://doi.org/10.3969/j.issn.1000-3428.2014.09.019
    To obtain its performance gain, a CoMP system requires users to feed back a large amount of channel information, but the energy consumption of the system increases with the feedback overhead. From the viewpoint of green wireless systems, a feedback algorithm based on a dynamic SNR threshold is proposed. By cutting unnecessary feedback information while minimizing the performance loss, it achieves a good compromise between system performance and energy consumption, reduces unnecessary cooperative interaction between nodes, and thus improves the energy efficiency of the system. Simulation results show that, under the same system performance requirements, the proposed algorithm effectively reduces the feedback overhead compared with a fixed SNR threshold and with no selective feedback at all.
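The dynamic-threshold idea can be sketched in a few lines: each user reports CSI only when its SNR clears the current threshold, and the base station nudges the threshold toward a target feedback load. The step size and target below are illustrative, not the paper's adaptation rule.

```python
def select_feedback_users(snrs_db, threshold_db):
    """Indices of users whose SNR clears the threshold and thus feed back CSI."""
    return [i for i, snr in enumerate(snrs_db) if snr >= threshold_db]

def adapt_threshold(threshold_db, n_reported, target, step_db=0.5):
    """Raise the threshold when too many users reported, lower it when too
    few did, so the feedback load tracks the target over time."""
    if n_reported > target:
        return threshold_db + step_db
    if n_reported < target:
        return threshold_db - step_db
    return threshold_db
```

Users in deep fades contribute little to the CoMP gain anyway, so suppressing their reports trades negligible performance for real energy savings, which is the compromise the abstract describes.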
  • ZHANG Yu-fang,CHEN Guang-li,XIONG Zhong-yang,YAN De-han
    Computer Engineering. 2014, 40(9): 96-101. https://doi.org/10.3969/j.issn.1000-3428.2014.09.020
    Aiming at the complex design, limited port expansion and high maintenance costs of current network infrastructure, this paper proposes a longitudinal heterogeneous scheme based on Intelligent Resilient Framework (IRF) virtualization technology. The scheme virtualizes several low-end switches and an IRF system into a single logical device. On one hand, the number of ports of the logical device increases and a large switching network becomes simpler to manage; on the other hand, the scheme reduces the cost of building such a network. In the topology plane, multiple longitudinal links between the low-end switches and the IRF system are bound into one logical link by LACP, and traffic is balanced across them. In the control plane, the control and management plane of the low-end switches is shifted upward and centrally controlled and decided by the IRF system. Finally, the paper compares reliability, traffic load balancing, expansion and maintenance with existing schemes. Experimental results show that the scheme overcomes the shortcomings of existing schemes and raises port density at lower cost.
  • WANG Yi-bin,NI Wei-ming
    Computer Engineering. 2014, 40(9): 102-105. https://doi.org/10.3969/j.issn.1000-3428.2014.09.021
    In Cognitive Radio (CR) networks, non-cooperative game theory can be used to reduce each user's transmit power under the premise that cognitive users satisfy their target SINR. This paper studies joint power and rate control for underlay CR. It models the problem among Secondary Users (SUs) as a Non-cooperative Game (NG) and proposes a joint power and rate control algorithm based on the Delay Cost (DC) of SUs, subject to tolerable interference limits, and proves the existence and uniqueness of the algorithm's Nash Equilibrium (NE). Simulation results show that the algorithm attains higher utility and lower transmission delay with lower power, while the aggregate interference caused by the SUs never exceeds the interference threshold.
  • TAO Zhi-yong,WANG Ru-long,ZHANG Jin
    Computer Engineering. 2014, 40(9): 106-110. https://doi.org/10.3969/j.issn.1000-3428.2014.09.022
    When a Virtual Private Network (VPN) constructed with Multiprotocol Label Switching (MPLS) and the Border Gateway Protocol (BGP) is used across domains, label paths may not be found and boundary devices become overloaded. Aiming at these problems, this paper analyses the root causes on the label distribution and data forwarding planes. It puts forward two schemes, a back-to-back design and a single-hop scheme based on EBGP, to solve the label distribution problem; since the overload problem remains unsolved, it further proposes a load-separation VPN solution based on multi-hop EBGP. In detail, the scheme improves the traditional VPN design by extending existing protocols and modifying the system framework, and gives concrete configurations for different network environments. Evaluating the schemes on several indexes shows that the load-separation VPN scheme is effective in realizing cross-domain VPN and solves both the label distribution and the overload problems.
  • XIE Huang,ZHANG Yu,WANG Yun-kai
    Computer Engineering. 2014, 40(9): 111-116,123. https://doi.org/10.3969/j.issn.1000-3428.2014.09.023
    Because the unstructured Peer-to-Peer (P2P) network lacks a global governance mechanism, nodes know neither the entire network topology nor the location of target data, so query message routing is highly random: query performance is low and bandwidth consumption is large. Based on an analysis of the two typical categories of unstructured P2P routing algorithms, this paper proposes a node-based Mixed Query Routing (MQR) algorithm to curb redundant messages and improve the search scope of data. Using the status information of nodes and the TTL values of queries, it improves search performance in both search scope and network efficiency. Simulation results show that, compared with the typical APS and Random Walk algorithms, the MQR algorithm reaches higher accuracy, better network efficiency and higher recall.
  • ZHANG Cheng,YANG Dong-feng,HUANG Xie,ZHANG Gen-yao
    Computer Engineering. 2014, 40(9): 117-123. https://doi.org/10.3969/j.issn.1000-3428.2014.09.024
    The existing content cache algorithms of Content Delivery Networks (CDN) make the routing table expand as the network grows, which impairs routing efficiency and network performance. Therefore a related-content attraction algorithm is proposed. To keep the featured contents cached at a node stable, the algorithm attracts contents with the node's major characteristics, rejects contents with secondary characteristics, and enlarges the difference between contents of different characteristics; through the mutual attraction of contents with the same feature, related contents gather on the same nodes, which eases the abstraction of cache content features. Meanwhile, a lifetime-extension strategy for contents carrying the main feature is designed to reduce routing advertisements and improve routing scalability. Experimental results show that the proposed algorithm reduces the update frequency of cached content and improves routing reliability.
  • TIAN Xue-ying,LIU Yan-heng,SUN Xin,WANG Ya-zhou,LIN Jia-jia
    Computer Engineering. 2014, 40(9): 124-129,142. https://doi.org/10.3969/j.issn.1000-3428.2014.09.025
    Considering the dynamics of social networks, the different importance of users and the direction of information interaction, a topological model that accurately describes mobile social networks is proposed, based on four initial networks. Random walk theory and an improved PageRank algorithm are adopted, and a transition probability is introduced to relate the network topology of consecutive time steps. First, the PageRank algorithm computes the strength of each node to obtain the probability transition matrix. Then random walk theory derives the current time step's edge existence probability matrix from the previous time step's matrix and the transition matrix. In each time step a node is added and departing nodes are checked for. Finally, the four initial networks are simulated with respect to in-degree, out-degree, strength distribution and the correlation between degree and strength. The results indicate that all four initial networks show a clear power-law character in these quantities, demonstrating that random walk theory with the improved PageRank algorithm describes mobile social networks well, which is of practical significance.
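The strength/transition-matrix step can be illustrated with plain power-iteration PageRank over a weighted adjacency matrix; the damping factor and the uniform treatment of dangling nodes are conventional assumptions, and the paper's improved variant is not reproduced here.

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-12):
    """Power-iteration PageRank on a (possibly weighted) adjacency matrix.
    Row i of the transition matrix M is node i's out-links, normalised;
    dangling nodes jump uniformly to every node."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    M = np.where(out > 0, adj / np.maximum(out, 1e-12), 1.0 / n)
    rank = np.full(n, 1.0 / n)
    while True:
        nxt = (1 - d) / n + d * (M.T @ rank)   # teleport + follow-links terms
        if np.abs(nxt - rank).sum() < tol:
            return nxt
        rank = nxt
```

In the model above, these ranks play the role of node strengths from which the time-step transition probabilities are built.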
  • LIU Chao,LI Jun-zhan,HUANG Wei
    Computer Engineering. 2014, 40(9): 130-133,148. https://doi.org/10.3969/j.issn.1000-3428.2014.09.026
    To improve the adaptability and communication efficiency of CAN control networks, ProCAN, a protocol for the process industry, is designed based on the CAN 2.0A technical specification and the features of machining process control networks. This paper analyses the structural model of a machining process control system, proposes the types of communication messages, and defines the coding of the standard data frame's arbitration field and data field. The communication model, the control of the communication state of the network, abnormal communication handling, and the retransmission of long packets after error frames are also discussed for the ProCAN protocol. Simulation experiments with the OPNET network simulator show that the ProCAN protocol achieves message communication with low delay, strong real-time capability and high reliability, especially when the network is overloaded.
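The arbitration-field coding can be illustrated by packing an 11-bit CAN 2.0A identifier from priority, node-address and message-type subfields. This particular 3/4/4 bit split is a hypothetical layout for illustration, not the coding defined by ProCAN.

```python
def pack_can_id(priority, node_addr, msg_type):
    """Pack an 11-bit CAN 2.0A arbitration field: 3-bit priority in the most
    significant bits (lower values win bus arbitration), then a 4-bit node
    address and a 4-bit message type (hypothetical layout)."""
    if not (priority < 8 and node_addr < 16 and msg_type < 16):
        raise ValueError("subfield out of range")
    return (priority << 8) | (node_addr << 4) | msg_type

def unpack_can_id(can_id):
    """Recover (priority, node_addr, msg_type) from an 11-bit identifier."""
    return (can_id >> 8) & 0x7, (can_id >> 4) & 0xF, can_id & 0xF
```

Putting the priority bits first is what gives urgent process messages deterministic access to the bus under overload, which is the behaviour the OPNET experiments measure.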
  • HUANG Yun-ting,JIANG Nan,DU Cheng-lie
    Computer Engineering. 2014, 40(9): 134-137,154. https://doi.org/10.3969/j.issn.1000-3428.2014.09.027
    To improve the real-time performance of heterogeneous communication systems, this paper proposes a cross-platform Real-time TCP/IP (RTTCP/IP) protocol stack. An OS-independent layer in RTTCP/IP shields the differences in system-level data processing, providing good portability and extensibility. The standard TCP/IP stack is simplified so that fewer system resources are demanded, making RTTCP/IP a lightweight protocol stack. Packet data is not duplicated while being passed within the stack, and a TDMA MAC is adopted to avoid communication collisions. Besides, to guarantee that emergency data is processed within a foreseeable period of time, a priority mechanism is introduced to tackle the thread (or packet) priority inversion problem. Test results show that this implementation of RTTCP/IP reduces system overhead and communication delay, and improves the system's real-time performance and stability.
  • QI Xiang-ming,SHI Shuang-yu,YANG Xiao-tao
    Computer Engineering. 2014, 40(9): 138-142. https://doi.org/10.3969/j.issn.1000-3428.2014.09.028
    General three-dimensional mesh watermarking algorithms cannot balance embedding capacity and transparency, and blind watermark detection is not easy to achieve. Aiming at these problems, this paper presents a blind watermarking algorithm based on feature points. It uses the global three-dimensional model to find the farthest points as global feature points and, according to the principle of affine invariance, maps the original carrier into a fixed spherical space of the global coordinate system to enhance robustness. According to point density, it divides the carrier into local spaces; to enhance the transparency of the algorithm, the point farthest from the centroid of each local space is taken as the local feature vertex to establish local geometric coordinates. Blind watermarking is achieved by using the projection of vertex angles defined in this coordinate system to store the watermark indexes. Experimental results show that the proposed algorithm is robust and imperceptible against attacks such as geometric transformation, simplification, random noise and shearing, and additionally supports blind watermark detection.
  • REN Zhi-yu,CHEN Xing-yuan,MA Jun-qiang
    Computer Engineering. 2014, 40(9): 143-148. https://doi.org/10.3969/j.issn.1000-3428.2014.09.029
    Abstract ( ) Download PDF ( )   Knowledge map   Save
By introducing attributes to provide richer semantics for Role-Based Access Control (RBAC) management policies, an attribute-based role assignment model is proposed. It is formalized by description logic, including its concepts and relations. To resolve the difficulty of privilege management policy detection in distributed environments, the user-role reachability analysis problem is defined and analyzed. The inference rules are described in SWRL and imported into an inference engine to realize automated reasoning. An application example shows that the reasoning method is correct and feasible. Experimental results show that the reasoning time rises slowly with the number of policies, so the reasoning method is practical for automatic policy detection. It can avoid potential security problems and offers a basis for the safe application of the privilege management model.
  • XU Chan,LIU Xin,WU Jian,OUYANG Bo-yu
    Computer Engineering. 2014, 40(9): 149-154. https://doi.org/10.3969/j.issn.1000-3428.2014.09.030
    Abstract ( ) Download PDF ( )   Knowledge map   Save
At present, malware judgment in the information security field in China is of relatively low intelligence. This paper analyzes a large number of malware samples, extracts typical characteristics of dangerous behaviors, then integrates these behaviors and builds a mapping library that converts them into data. It also designs an algorithm to make the data directly usable for training. Through extensive experiments, a BP neural network suitable for this training task is designed, and each operator and parameter is determined. By training the neural network, a system is established to judge whether a suspicious sample is malware. Experimental results show that the approach is effective, with a false alarm rate of 1% and a false negative rate of 3.7%.
  • SHANG Xue-jiao,DU Wei-zhang
    Computer Engineering. 2014, 40(9): 155-158,166. https://doi.org/10.3969/j.issn.1000-3428.2014.09.031
    Abstract ( ) Download PDF ( )   Knowledge map   Save
In order to solve the problems that previous publicly verifiable multi-secret sharing schemes can be constructed only with Lagrange interpolation polynomials and that the shared secrets are limited to a finite field or an additive group, a publicly verifiable multi-secret sharing scheme based on bilinear pairings is proposed. In the scheme, each participant holds two shares for reconstructing multiple secrets, and the verification information is generated in the process of secret distribution. Based on the public verification information, anyone can verify the validity of secret shares, so cheating by the dealer or participants can be detected in time. In the secret reconstruction process, the Hermite interpolation theorem is used to reconstruct the secret polynomial, and bilinear operations are combined to reconstruct the secret. Under the assumption of the Bilinear Diffie-Hellman Problem (BDHP), the analysis shows that this scheme can resist internal and external attacks and is a secure and efficient multi-secret sharing scheme.
  • WANG Qun,ZHAO Guang-song,XU Bo
    Computer Engineering. 2014, 40(9): 159-166. https://doi.org/10.3969/j.issn.1000-3428.2014.09.032
    Abstract ( ) Download PDF ( )   Knowledge map   Save
Nodes in Delay Tolerant Networks (DTN) rely on cooperation among nodes to forward messages to their destinations. How to effectively enhance this cooperation is a challenge, and the blackhole attack is a typical non-cooperative behavior. To detect and restrain blackhole attacks, a blackhole node detection mechanism based on attack features is proposed, which extracts three essential features of the blackhole attack: forged high relay capacity, imbalance in the number of messages forwarded between nodes, and high message loss rate. The mechanism exploits local voting and cooperative detection among nodes to determine the probability that a node is a blackhole node. Simulation results show that the proposed mechanism, named AFD-Prophet, improves the delivery rate without increasing the delivery delay in comparison with the reputation-based detection protocol T-Prophet.
  • FANG Meng-meng,HE Jia-ming,SHI Zhi-hui
    Computer Engineering. 2014, 40(9): 167-169,173. https://doi.org/10.3969/j.issn.1000-3428.2014.09.033
    Abstract ( ) Download PDF ( )   Knowledge map   Save
Commonly used methods of imperceptibility evaluation for information hiding algorithms can neither express subjective evaluation comprehensively nor evaluate the performance of the algorithms accurately. To solve this problem, a new method based on visual masking is proposed. It combines the linear relationship between the perceptual quality of an image and its Mean Square Error (MSE) with the perceptual properties of texture and luminance, with the aim of improving perceptual quality assessment. In this method, a linear expression between perceptual quality and MSE is given, in which luminance and gradient are used to calculate the luminance and texture weight coefficients respectively. By weighting the expression linearly, a global measure of the imperceptibility of the stego image is obtained. Experimental results demonstrate that the proposal outperforms the conventional Peak Signal-to-Noise Ratio (PSNR) in efficiency and in keeping subjective and objective evaluations of image quality consistent.
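As a baseline for the weighted measure above, the conventional MSE/PSNR computation the paper compares against can be sketched as follows (a generic 8-bit grayscale version, not the paper's luminance/texture-weighted variant):

```python
import numpy as np

def mse(img_a, img_b):
    """Mean squared error between two equally sized grayscale images."""
    diff = img_a.astype(np.float64) - img_b.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(img_a, img_b, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; infinite for identical images."""
    m = mse(img_a, img_b)
    if m == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / m)
```

The paper's method replaces the uniform averaging inside `mse` with per-pixel weights derived from local luminance and gradient.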
  • ZHANG Rui-li,LI Shun-dong
    Computer Engineering. 2014, 40(9): 170-173. https://doi.org/10.3969/j.issn.1000-3428.2014.09.034
    Abstract ( ) Download PDF ( )   Knowledge map   Save
In existing electronic consumption systems, multiple spending often occurs, causing low efficiency and even cash disorder. To tackle this problem, this paper proposes a new efficient and safe electronic consumption scheme based on bilinear pairings and linkable ring signature. A linkable ring signature retains the efficiency and security of a ring signature, and the scheme can check whether a signature is correct. It can detect the same user spending a limited amount of cash repeatedly. Through the three stages of designing, consuming and saving, it connects the transactions of consumers, businesses and banks, and keeps electronic consumption running. The scheme preserves user anonymity, prevents multiple spending, and satisfies the basic requirements of electronic consumption. Analysis and proof show that the scheme is more feasible and efficient than the scheme proposed by Liu, et al. (Wuhan University Journal of Natural Sciences, 2013, No. 2).
  • YANG Cheng,WANG Yun-kai,HONG Rui-long
    Computer Engineering. 2014, 40(9): 174-177,182. https://doi.org/10.3969/j.issn.1000-3428.2014.09.035
    Abstract ( ) Download PDF ( )   Knowledge map   Save
This paper studies the cultural features of Chinese Internet users' passwords by studying the relationship between the 26 letters appearing in password characters and Pinyin. It performs letter frequency statistics based on Chinese phonetic alphabet frequencies on the Internet, with a treatment method for polyphones. After briefly summarizing the general statistical characteristics of passwords, it focuses on analyzing the similarity between the letter frequencies of Internet users' passwords and those of Pinyin text in China and English text in western countries. It reveals that the password design of Chinese Internet users is closely related to Pinyin, and that users are accustomed to Pinyin phrase-based mnemonic passwords.
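The letter-frequency statistics such a study relies on amount to simple counting over the password corpus; a minimal sketch, with hypothetical Pinyin-style passwords used purely for illustration:

```python
from collections import Counter

def letter_frequency(passwords):
    """Relative frequency of each a-z letter across a password list."""
    counts = Counter(ch for pw in passwords for ch in pw.lower() if ch.isalpha())
    total = sum(counts.values())
    return {ch: counts[ch] / total for ch in counts}

# Hypothetical Pinyin-style passwords, not data from the paper.
freq = letter_frequency(["woaini1314", "zhangwei", "nihao123"])
```

The resulting distribution would then be compared against reference letter frequencies of Pinyin and English text.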
  • QU Chang-bo,YANG Xiao-tao,YUAN Duo-ning
    Computer Engineering. 2014, 40(9): 178-182. https://doi.org/10.3969/j.issn.1000-3428.2014.09.036
    Abstract ( ) Download PDF ( )   Knowledge map   Save
Traditional zero watermarking algorithms construct watermarks that are often meaningless binary sequences, which makes copyright identification neither intuitive nor fast. Referencing robust watermarking algorithms based on visual codes and combining the zero-watermark idea, this paper puts forward a meaningful zero watermarking algorithm based on visual codes, which embeds a meaningful copyright image as the zero watermark for customers. It uses the balanced multi-wavelet transform to obtain the actual carrier, applies block Singular Value Decomposition (SVD) to the actual carrier image, and calculates a balance factor to get the difference matrix, from which a transition matrix is generated. The transition matrix is combined with 2 × 2 parts by a visual secret sharing algorithm to generate the image feature information. The algorithm uses the image feature information and the customer watermark information to generate the zero watermark. Experimental results show that the algorithm has good security and robustness, and is a reliable zero watermarking algorithm for image copyright authentication.
  • WANG Jian-fei,MA De,XIONG Dong-liang,CHEN Liang,HUANG Kai,GE Hai-tong
    Computer Engineering. 2014, 40(9): 183-189. https://doi.org/10.3969/j.issn.1000-3428.2014.09.037
    Abstract ( ) Download PDF ( )   Knowledge map   Save
To improve the efficiency of System-on-Chip (SoC) integration and verification for different information security applications, a complete and pre-verified encryption and decryption subsystem based on an embedded CPU is proposed. The subsystem includes cryptography modules such as RSA, DES and AES, and can satisfy applications with different security-level requirements. The embedded CPU in the subsystem is a low-power, high-performance CPU acting as a coprocessor for the main CPU in the SoC. It is responsible for controlling the operation of the cryptography modules, greatly reducing both the computation load of the main CPU and the power of the SoC. Integrating the pre-verified encryption and decryption subsystem as a whole into the SoC significantly reduces SoC design and integration effort and lowers the difficulty of SoC verification. Gated clock technology, which manages the clocks of the cryptography modules based on their states, reduces the power of the subsystem effectively. According to the CKSoC integration method, the subsystem based on the embedded CPU can be implemented quickly in the SoC integrator with different hardware configurations. Experimental results show that the SoC design and verification work of constructing the subsystem is reduced, improving work efficiency.
  • ZHANG Liang-liang,FENG Jing,HU Gu-yu
    Computer Engineering. 2014, 40(9): 190-195. https://doi.org/10.3969/j.issn.1000-3428.2014.09.038
    Abstract ( ) Download PDF ( )   Knowledge map   Save
Traditional community detection methods can only process small-scale, low-dimensional social network data because of their complexity. Aiming at this problem, this paper proposes a deep learning method for community structure detection based on compressive sensing. Taking advantage of a random measurement matrix to greatly reduce the feature dimension of the social network, it uses a Deep Belief Network (DBN) to learn unsupervised from the low-dimensional samples. The model is then fine-tuned by supervised learning on a small sample set with class labels. Experimental results show that the random measurement method reduces dimensionality well for sparse features, and that the DBN performs well on large data. The method is shown to be advantageous over other community detection methods on large-scale, high-dimensional real social network data.
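The random-measurement step can be sketched as a Gaussian random projection y = Φx; the dimensions, seed and scaling below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def random_measure(features, m, seed=0):
    """Project n-dimensional feature vectors to m dimensions (m << n)
    with a Gaussian random measurement matrix, as in compressive sensing."""
    n = features.shape[1]
    rng = np.random.default_rng(seed)
    phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))  # measurement matrix
    return features @ phi.T

# Hypothetical sparse-feature matrix: 100 nodes, 2000-dimensional features.
x = np.random.default_rng(1).normal(size=(100, 2000))
y = random_measure(x, 64)
```

The low-dimensional samples `y` would then feed the unsupervised DBN training stage.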
  • XUE Yan-xue,XUE Meng,LIU Yi-jie,BAI Xiao-hui
    Computer Engineering. 2014, 40(9): 196-199. https://doi.org/10.3969/j.issn.1000-3428.2014.09.039
    Abstract ( ) Download PDF ( )   Knowledge map   Save
A palmprint recognition method which can solve the small sample size problem of Bidirectional Principal Component Analysis (BDPCA) is presented. The implementation procedure is as follows: each image obtained by a 2D Gabor wavelet transform of the palmprint Region of Interest (ROI) image is treated as an independent sample, in order to increase the number of samples of each palmprint class. An improved algorithm based on the sample scatter matrix is designed to extract the palmprint features; it obtains the best projection matrix by adopting k-value matrices instead of the average matrix of the training samples. The 2D Gabor transform and the improved BDPCA algorithm are combined to identify each palmprint. Experimental results on the PolyU palmprint database demonstrate that the proposed method not only reduces the influence of different training samples on the recognition rate but also increases the rate, performing especially well when the number of training samples is 1. The method effectively solves the small sample size problem of palmprint recognition.
  • HU Xiao-dong,WU Yao-yao,CHEN Jin-ping,ZOU Jing
    Computer Engineering. 2014, 40(9): 200-203. https://doi.org/10.3969/j.issn.1000-3428.2014.09.040
    Abstract ( ) Download PDF ( )   Knowledge map   Save
An extraction method is proposed for artifacts caused by scintillation defects in X-ray projection images. Defective pixels are extracted as local abnormal points: the normal images are transformed into polar coordinates, and the local abnormal points are extracted to obtain mask images with the defective pixels marked. Objective data and subjective visual comparison on simulated images validate the effectiveness of the extraction algorithm. The BSCB inpainting algorithm, whose process includes spreading along the isolux lines and anisotropic diffusion, is then applied to restore X-ray projection images at different magnifications. Experimental results show that the image quality is improved obviously.
  • SHAO Feng-xian,LI Feng,ZHOU Shu-ren
    Computer Engineering. 2014, 40(9): 204-209. https://doi.org/10.3969/j.issn.1000-3428.2014.09.041
    Abstract ( ) Download PDF ( )   Knowledge map   Save
This paper presents Haar-LL, a feature fusion method combining the two-dimensional discrete Haar wavelet transform with the Local Binary Pattern (LBP) and the Local Gradient Pattern (LGP). The image is decomposed by the two-dimensional discrete Haar wavelet transform to obtain four sub-images of different frequencies; the LBP feature is extracted from the low-frequency part, and LGP features are extracted from the three high-frequency sub-images. The three LGP features are fused in parallel and then fused serially with the LBP feature. Using a Support Vector Machine (SVM) in the Matlab environment, five groups of experiments are carried out on the INRIA dataset, comparing detection rate, detection time, and robustness to illumination and noise against Histograms of Oriented Gradients (HOG), Pyramid of Histograms of Orientation Gradients (PHOG), LBP and LGP. The overall experimental data show that the method's robustness to illumination and noise is better.
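The basic 3×3 LBP operator used in such fusions can be sketched as follows (a generic implementation; the uniform-pattern refinement and the LGP variant are omitted):

```python
import numpy as np

def lbp(image):
    """Basic 3x3 Local Binary Pattern: each interior pixel gets an 8-bit
    code, one bit per neighbour whose value is >= the centre value."""
    img = image.astype(np.int32)
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.int32)
    # Neighbour offsets, clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:h - 1, 1:w - 1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(np.int32) << bit
    return codes.astype(np.uint8)
```

A histogram of these codes over each sub-image would form the feature vector fed to the SVM.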
  • YANG Xian-feng,YANG Yan
    Computer Engineering. 2014, 40(9): 210-214. https://doi.org/10.3969/j.issn.1000-3428.2014.09.042
    Abstract ( ) Download PDF ( )   Knowledge map   Save
Aiming at the false detections that occur in vehicle detection, this paper analyzes the causes of error and proposes a vehicle detection method based on the fusion of shape and texture characteristics. It calculates the Histogram of Oriented Gradients (HOG) feature and the uniform Local Binary Pattern (LBP) operator for every cell in the detection window, and solves the problem of high feature dimension and redundancy by Principal Component Analysis (PCA) in the shape and texture feature-forming process. Combined with a Support Vector Machine (SVM), feature training and test experiments are performed. Experimental results show that this method effectively balances the shape and texture characteristics of vehicle images, significantly reduces the error probability of vehicle detection while meeting the detection speed requirement, and achieves good results in both efficiency and accuracy.
  • HUANG Chong-qing,XU Zhe-zhuang,HUANG Yan-wei,LAI Da-hu
    Computer Engineering. 2014, 40(9): 215-219,224. https://doi.org/10.3969/j.issn.1000-3428.2014.09.043
    Abstract ( ) Download PDF ( )   Knowledge map   Save
The number of hidden nodes is a critical factor for the generalization of the Extreme Learning Machine (ELM). Traditional algorithms for determining the number of hidden layer nodes of an ELM suffer from complex optimization processes, overfitting, or traps in local optima. Aiming at these problems, Structural Risk Minimization ELM (SRM-ELM) is proposed. Combining empirical risk with the VC confidence, a novel algorithm is proposed to automatically obtain the best number of hidden nodes and guarantee good generalization. On this basis, the Particle Swarm Optimization (PSO) position value is directly treated as the number of ELM hidden layer nodes, and the PSO search is guided by the Structural Risk Minimization (SRM) principle. The optimal number of hidden nodes obtained is reasonable in 6 test cases. Simulation results show that the algorithm can obtain the optimal number of ELM nodes and better generalization ability.
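A minimal ELM, whose hidden-node count the SRM-guided PSO search would tune, can be sketched as follows (hypothetical data and node count; the SRM objective and PSO loop are not shown):

```python
import numpy as np

def elm_train(x, y, hidden, seed=0):
    """Extreme Learning Machine: random input weights, sigmoid hidden
    layer, output weights solved in one step by least squares."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(x.shape[1], hidden))  # random, never retrained
    b = rng.normal(size=hidden)
    h = 1.0 / (1.0 + np.exp(-(x @ w + b)))     # hidden activations
    beta = np.linalg.pinv(h) @ y               # output weights (pseudo-inverse)
    return w, b, beta

def elm_predict(x, w, b, beta):
    h = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return h @ beta
```

In SRM-ELM, `hidden` would be the PSO particle position, scored by empirical risk plus the VC confidence term rather than training error alone.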
  • GAO Chang-yuan,WANG Ting-ting,LI Yan-lai,PENG Ding-hong
    Computer Engineering. 2014, 40(9): 220-224. https://doi.org/10.3969/j.issn.1000-3428.2014.09.044
    Abstract ( ) Download PDF ( )   Knowledge map   Save
Aiming at the problem that different types of attribute values cannot keep uncertain information effectively in hybrid decision making, and according to the principle of uncertainty, this paper introduces the Determination-Uncertainty (D-U) space theory of connection numbers into hybrid multiple attribute decision making problems, and a new method of hybrid multiple attribute decision making is proposed. It treats the determinate and the uncertain as a whole, puts forward rules for mapping different types of attribute values to D-U space, unifies the quantification of the different attribute value types in this space, and defines the uncertain information of attributes, so as to avoid decision-making biases caused by the loss of uncertain information. In the decision-making process, alternatives are chosen by calculating the norm and argument of the attribute vectors in the space, which describes the stability of all alternatives intuitively and gives the sorting criterion an intuitive sense. Finally, the suitability and practicability of the method are proved through two examples.
  • CAO Qian-xia,LUO Da-yong,WANG Zheng-wu
    Computer Engineering. 2014, 40(9): 225-228,232. https://doi.org/10.3969/j.issn.1000-3428.2014.09.045
    Abstract ( ) Download PDF ( )   Knowledge map   Save
Background estimation is an important preparation for moving object detection. In complex scenes such as urban traffic, the background model is easily contaminated by slow-moving or temporarily stopped objects, and many subsequent processing steps or computationally expensive algorithms are needed to detect the foreground. To solve this problem, this paper proposes a background estimation algorithm based on improved Sigma-Delta filtering, which achieves a more stable background model by combining a selective background updating mechanism with a multiple-frequency Sigma-Delta background estimation method to deal with the different object motion characteristics of complex scenes. Comparative experiments on complex traffic scene sequences of a typical urban road and an intersection show that the proposed algorithm achieves better detection results while keeping the high efficiency and low consumption of Sigma-Delta filtering.
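The elementary Sigma-Delta update underlying the improved algorithm can be sketched as follows (a single-frequency step without the paper's selective gating or multi-frequency scheduling):

```python
import numpy as np

def sigma_delta_update(background, frame):
    """One Sigma-Delta step: move each background pixel exactly one grey
    level toward the current frame (comparator + elementary increment)."""
    bg = background.astype(np.int32)
    fr = frame.astype(np.int32)
    inc = (fr > bg).astype(np.int32)
    dec = (fr < bg).astype(np.int32)
    return bg + inc - dec
```

The paper's selective mechanism would skip this update on pixels judged to belong to slow or stopped foreground objects, and run it at several frequencies.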
  • LV Rui,LI Ming,WANG Ming-kuo,LIU Huan-huan,XUE Jing-yuan
    Computer Engineering. 2014, 40(9): 229-232. https://doi.org/10.3969/j.issn.1000-3428.2014.09.046
    Abstract ( ) Download PDF ( )   Knowledge map   Save
Aiming at the problem that Iterative Closest Point (ICP) registration suffers from error accumulation and cannot meet the positioning accuracy demand of wide-range SLAM, a fused ICP and graph optimization algorithm is proposed. Through ICP and graph optimization, data features of the same site at different times are extracted, loop closures are formed, and global optimization based on least squares is performed. The method is tested with real datasets. Results show that the method can decrease the mapping error to a certain extent and meet the global accuracy demand of SLAM, with a mean error of 1.0 m and a least error of 0.2 m.
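The core of one ICP iteration, the closed-form rigid alignment of matched point sets, can be sketched with an SVD-based (Kabsch-style) solver; this is a generic building block, not the paper's full fused pipeline:

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst,
    via SVD of the cross-covariance matrix."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    h = (src - cs).T @ (dst - cd)          # cross-covariance of centred sets
    u, _, vt = np.linalg.svd(h)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:               # guard against reflections
        vt[-1] *= -1
        r = vt.T @ u.T
    t = cd - r @ cs
    return r, t
```

In the fused algorithm, the relative poses from such alignments become edges of the pose graph that the least-squares loop-closure optimization refines.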
  • ZHONG Jian-cheng,PENG Wei
    Computer Engineering. 2014, 40(9): 233-237. https://doi.org/10.3969/j.issn.1000-3428.2014.09.047
    Abstract ( ) Download PDF ( )   Knowledge map   Save
Gene Expression Programming (GEP) is prone to producing redundant valid strings of chromosomes, which impacts its performance dramatically. To address the problem, this paper proposes a new strategy named Memory Population Reducing Redundant GEP (MPRRGEP), which checks for repeated valid strings of chromosomes and reduces the redundancy in memory. It analyses the influence of valid strings in both single-gene and multi-gene chromosomes on the performance of GEP, and designs a method that can effectively measure the validity of individual chromosomes. Using a hash technique, an index of the data of valid individual chromosomes is constructed in memory so as to reduce the number of times the same valid strings are computed and improve the performance of GEP. Experimental results show that the method can save above 60% of the computing time on average.
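The hash-indexed memoization idea can be sketched in a few lines; the evaluation function below is a hypothetical placeholder standing in for GEP fitness computation:

```python
class FitnessCache:
    """Hash-indexed memo of chromosome valid strings, so identical
    expressions are evaluated only once."""
    def __init__(self, evaluate):
        self.evaluate = evaluate
        self.table = {}   # valid string -> cached fitness
        self.hits = 0     # redundant evaluations avoided

    def fitness(self, valid_string):
        if valid_string in self.table:
            self.hits += 1
        else:
            self.table[valid_string] = self.evaluate(valid_string)
        return self.table[valid_string]

# Hypothetical evaluator that records its calls, to show redundant work is skipped.
calls = []
cache = FitnessCache(lambda s: calls.append(s) or len(s))
for expr in ["+ab", "*ab", "+ab", "+ab"]:
    cache.fitness(expr)
```

With redundant valid strings common in GEP populations, the hit count is exactly the evaluation work saved each generation.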
  • WANG Xiao-lin,ZHEN Li-hua,YANG Si-chun,TAI Wei-peng,ZHENG Xiao
    Computer Engineering. 2014, 40(9): 238-242. https://doi.org/10.3969/j.issn.1000-3428.2014.09.048
    Abstract ( ) Download PDF ( )   Knowledge map   Save
Since the performance of a classifier generated by a fixed training set is not satisfactory and can hardly track users' needs dynamically, the incremental Bayes idea is introduced into question classification in this paper. To eliminate feature redundancy in the training set, a Genetic Algorithm (GA) is used to select the optimal features to amend the classifier. In the process of classifier learning, the parameters are modified dynamically while the training set is expanded. The interrogative word, syntax structure, question focus words and their first sememes are chosen as classification features. To verify the effectiveness of the proposed method, questions of different sizes are randomly extracted from the corpus to build the incremental sets, and questions from the same test set are classified based on different incremental sets. Experimental results show that the incremental Bayes classifier achieves better results: the classification accuracy of coarse classes and fine classes reaches 90.2% and 76.3% respectively. At the same time, it also improves efficiency to some degree.
  • WANG You-zhao,PAN Fen-lan,HUANG Jing
    Computer Engineering. 2014, 40(9): 243-247. https://doi.org/10.3969/j.issn.1000-3428.2014.09.049
    Abstract ( ) Download PDF ( )   Knowledge map   Save
Aiming at existing algorithms that do not use all four information subspaces of Linear Discriminant Analysis (LDA) when solving the small sample size problem, a two-stage LDA face recognition algorithm based on Two-Dimensional Principal Component Analysis (2D-PCA) is proposed. The small sample size problem is solved by a subtraction that estimates the inverse of the eigenvalue matrices of the singular within-class and between-class scatter matrices. The projection subspaces resulting from successively using the traditional Fisher criterion and a modified Fisher criterion are concatenated to obtain the optimal projection space, which includes all four information subspaces of LDA. To reduce computational complexity, 2D-PCA is used to preprocess the input samples. The recognition rates of the proposed algorithm on the ORL and YALE databases are 92.5% and 95.8%, higher than other LDA algorithms, despite a slight increase in training time.
  • GAO Jing-yang,ZHAO Yan
    Computer Engineering. 2014, 40(9): 248-251,256. https://doi.org/10.3969/j.issn.1000-3428.2014.09.050
    Abstract ( ) Download PDF ( )   Knowledge map   Save
Because classification algorithms rely on the differences among samples, a new method is proposed which adds a new attribute value dmin to each sample in order to increase the differences. Besides, given that samples belonging to different classes are sampled unevenly in the sampling phase, a method called even sampling is proposed to keep the proportions of the different classes invariant. To inhibit the growth rate of the weights of misclassified samples, a variable count(n) is introduced to record the number of misclassifications. In a word, an improved algorithm called Sampling equilibrium & Weight adjustment & Add attribute Adaboost (SWA-Adaboost) is proposed. Using 6 datasets from the machine learning repository of the University of California, experiments are run to compare the original Adaboost with SWA-Adaboost. Experimental results show that SWA-Adaboost has better generalization performance than the original Adaboost, with an average decrease in generalization error of 9.54%.
  • GONG Qu,MA Jia-jun
    Computer Engineering. 2014, 40(9): 252-256. https://doi.org/10.3969/j.issn.1000-3428.2014.09.051
    Abstract ( ) Download PDF ( )   Knowledge map   Save
Two-Dimensional Locality Preserving Projection (2DLPP) ignores the local information between neighborhoods of face samples and the correlation between the components of the extracted feature matrix. Aiming at this problem, a minimum-correlation supervised 2DLPP algorithm based on the Maximum Margin Criterion (MMC) is proposed. Between-class and within-class local scatter matrices are introduced, and the trace difference of the scatter matrices is maximized to increase the between-class scatter and decrease the within-class scatter of the samples, so that the manifold structure of the data can be characterized better. The covariance matrix of the extracted feature matrix is calculated to reduce feature redundancy. Experiments are done on the Yale and ORL face databases; when the number of training samples is 5, the highest recognition rates are 92.5% and 96.2%, higher than the traditional 2DLPP, Two-Dimensional Principal Component Analysis (2DPCA), Two-Dimensional Linear Discriminant Analysis (2DLDA) and Two-Dimensional Maximum Margin Criterion (2DMMC) algorithms. The mean and variance of the recognition rate are also analysed to prove the stability of the improved algorithm.
  • ZHU Wen-chao,XU De-zhang,FANG Tao
    Computer Engineering. 2014, 40(9): 257-262. https://doi.org/10.3969/j.issn.1000-3428.2014.09.052
    Abstract ( ) Download PDF ( )   Knowledge map   Save
The measurement accuracy of a sensor working under dynamic load can be seriously affected by noise pollution. A new improved particle filter with hierarchical optimization steps is proposed. The algorithm takes the rectangular thin plate of a dual-E elastic body six-axis force sensor as the research object, and establishes a nonlinear state-space model based on the relationship between the response to a sinusoidal excitation force and the strain. According to their degeneracy level, the sample sets are divided into two parts. Based on the weed breeding algorithm, the new measurements are transferred to the high-likelihood region. Based on the Thompson-Taylor algorithm, a new particle set produced by random combinations of particles is achieved through polymerization resampling of the transferred particles. Simulation results indicate that the new algorithm approaches the real posterior probability density with smaller estimation error. It can effectively enhance the measurement accuracy of the six-axis force sensor while maintaining real-time performance.
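For contrast with the hierarchical scheme above, the standard resampling step of a plain particle filter can be sketched as systematic resampling; this is the generic step that the paper's weed-breeding and Thompson-Taylor stages replace, not the proposed method itself:

```python
import numpy as np

def systematic_resample(particles, weights, seed=0):
    """Systematic resampling: draw len(particles) new particles with
    probability proportional to their normalized weights."""
    n = len(particles)
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()
    # One random offset, then evenly spaced positions through [0, 1).
    positions = (np.random.default_rng(seed).random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(w), positions)
    return particles[idx]
```

Plain resampling of this kind is what causes the sample impoverishment that polymerization resampling of the transferred particles is designed to mitigate.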
  • LIU Ming-xing,JIN Jian,LI Xiao-dong
    Computer Engineering. 2014, 40(9): 263-268. https://doi.org/10.3969/j.issn.1000-3428.2014.09.053
    Abstract ( ) Download PDF ( )   Knowledge map   Save
The threat that Domain Name System (DNS) data may be tampered with by hackers endangers DNS applications. Because this threat is hidden, a quick and effective method to find dangerous changes in DNS data is urgently needed. Regarding this problem, this paper proposes a machine learning based method to monitor DNS data, by which dangerous changes can be found quickly. Domain names whose data have changed are chosen from a large number of domain names, and their relevant information is analyzed to produce a tuple represented by a multi-dimensional attribute vector containing literal characteristics, forward-inverse match and so on. A class label is then assigned depending on whether the changes are malicious, so that an instance containing the tuple and its class label is built, and consequently a training set is built. By analyzing the training set, two classification algorithms, decision tree and Support Vector Machine (SVM), build classifiers that detect whether changes in DNS data are dangerous. 10-fold cross-validation is used to validate the two classifiers. The classifiers do well in finding dangerous changes in DNS data: the results show good precision, with weighted average accuracies of 73.8% and 82.4% respectively.
  • FU Zhong-man,ZHANG Hui,LI Miao,LIU Tao
    Computer Engineering. 2014, 40(9): 269-274,279. https://doi.org/10.3969/j.issn.1000-3428.2014.09.054
    Abstract ( ) Download PDF ( )   Knowledge map   Save
A novel hash algorithm is proposed in this paper for network processor applications. It resolves the hash collision problem by constructing a new lookup table and a new two-level hash function. The software processing and hardware lookup flows of the hash table are described, and the learning process and aging mechanism for table entries are designed to simplify entry updating. For different engineering applications, the algorithm sets up different hash tables, which improves memory utilization efficiency and optimizes the tradeoff between memory and processing speed. Simulation results show that the algorithm works well regardless of the number of table entries and the size of the keyword. The average length of a successful lookup is 2, and the number of memory accesses is reduced dramatically. The lookup speed of the micro-engine is improved to 25 Mb/s, satisfying the 20 Gb/s bandwidth performance requirement of the network processor.
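The two-level lookup idea can be sketched in pure Python; the hash functions, table sizes and collision policy below are illustrative placeholders, not the paper's hardware design:

```python
class TwoLevelHash:
    """Toy two-level hash table: a key goes into its primary bucket by h1;
    on collision it is rehashed by h2 into a secondary table, so a lookup
    touches at most two buckets."""
    def __init__(self, primary_size=64, secondary_size=64):
        self.primary = [None] * primary_size
        self.secondary = [None] * secondary_size

    def _h1(self, key):
        return hash(key) % len(self.primary)

    def _h2(self, key):
        return (hash(key) * 2654435761) % len(self.secondary)  # Knuth-style mix

    def insert(self, key, value):
        i = self._h1(key)
        if self.primary[i] is None or self.primary[i][0] == key:
            self.primary[i] = (key, value)
            return True
        j = self._h2(key)
        if self.secondary[j] is None or self.secondary[j][0] == key:
            self.secondary[j] = (key, value)
            return True
        return False  # both levels collide; a real design triggers an update path

    def lookup(self, key):
        i = self._h1(key)
        if self.primary[i] is not None and self.primary[i][0] == key:
            return self.primary[i][1]
        j = self._h2(key)
        if self.secondary[j] is not None and self.secondary[j][0] == key:
            return self.secondary[j][1]
        return None
```

Bounding every lookup to two bucket probes is what keeps the average successful lookup length at 2 and the memory access count predictable.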
  • QIN Feng,TIAN Jie,CHENG Ze-kai
    Computer Engineering. 2014, 40(9): 275-279. https://doi.org/10.3969/j.issn.1000-3428.2014.09.055
    Abstract ( ) Download PDF ( )   Knowledge map   Save
Passing actions run through RoboCup simulation games, in which both teams spare no effort to win. This paper studies the connection between passing actions and game results in depth. It adopts the idea of data mining, analyzing game log files with a C language program to collect the required passing data, divides passes into 5 types treated as independent variables with the score as the dependent variable, then establishes a mathematical model combined with Partial Least Squares (PLS). Several relevant figures are used to analyze and verify the experimental results obtained from SIMCA-P. The results show that, with 72.8% of the independent variable information and 74.4% of the dependent variable information, the VIP values of the 5 independent variables with respect to the dependent variable are 0.081 14, 0.996 66, 1.028 9, 1.088 06 and 1.325 73. Linking the theoretical results with practical scenes, it is concluded that the long pass plays a major role in winning games.
  • WU Yao,HUO Liang-sheng,LIU Yu-de,GU Zu-bao
    Computer Engineering. 2014, 40(9): 280-283. https://doi.org/10.3969/j.issn.1000-3428.2014.09.056
    To solve the tag collision problem in Radio Frequency Identification(RFID) systems where the maximum frame size is limited,this paper proposes a new Grouping Part-time Slot frame Prediction ALOHA(GPSPA) algorithm. Tags are divided into smaller groups according to the capacity of the limited frame size. A partial-slot prediction scheme is used during identification to decide whether to change the frame size immediately:if the percentage of empty or collision slots exceeds the threshold value,the frame size is changed promptly. Simulation results show that the proposed algorithm increases system efficiency and consumes fewer slots than previous work. The influence of the algorithm's parameters is also discussed through simulation tests. The system identification efficiency can be maintained at 35.58%,approximating the theoretical limit,even when the tag population changes greatly. The proposed algorithm provides a good solution for RFID systems where the number of tags varies over a wide range and the frame size is limited.
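The decision rule described above (resize the frame as soon as the observed share of empty or collision slots in the first part of a frame exceeds a threshold) can be sketched roughly as follows; the frame size, tag count, observation window and thresholds are placeholders, not the paper's values:

```python
# Rough sketch of partial-slot prediction for frame-size adaptation.
# After observing only the first `part` slots of a frame, decide
# whether to abandon the frame and restart with a new size.
import random

def simulate_slots(n_tags, frame_size, rng):
    """Each tag picks a slot uniformly; return per-slot reply counts."""
    slots = [0] * frame_size
    for _ in range(n_tags):
        slots[rng.randrange(frame_size)] += 1
    return slots

def should_resize(slots, part, empty_thr=0.5, coll_thr=0.5):
    observed = slots[:part]
    empty = sum(1 for s in observed if s == 0) / part
    coll = sum(1 for s in observed if s > 1) / part
    return empty > empty_thr or coll > coll_thr

rng = random.Random(7)
# Far too many tags for a tiny frame: collisions dominate, so the
# predictor triggers a resize without waiting for the frame to finish.
crowded = simulate_slots(200, 8, rng)
```

Abandoning a badly sized frame early is what saves slots compared with classic dynamic framed-slotted ALOHA, which only adapts between frames.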
  • YU Shi-dong,DAI Yong,WANG Qiu-zhen,LI Xuan,REN Kun
    Computer Engineering. 2014, 40(9): 284-290. https://doi.org/10.3969/j.issn.1000-3428.2014.09.057
    According to the requirements and features of data transmission in a classroom system whose terminals are writing-teaching devices,and after analyzing the limitations of other protocols in this situation,an Ethernet-based embedded LAN protocol suitable for the classroom system,CSELP,is proposed. Both real-time performance and transmission efficiency are improved. An 8-Byte data frame extension implements recognition,retransmission,flow control and other functions to provide a connection-oriented reliable transmission service,and the data processing and state-handling methods are simplified. The available bandwidth is estimated with an improved bandwidth estimation algorithm,and the congestion window is adjusted by predicting window-size changes during congestion avoidance. A newly designed retransmission queue is used to implement the packet-based ACK mechanism. Application results show that the protocol satisfies the requirements of the writing-teaching classroom system and can be extended to other,non-writing-teaching classroom systems.
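The exact layout of the 8-Byte frame extension is not given in the abstract; purely as an illustration, a Python `struct` encoding of a made-up header with sequence/acknowledgement numbers, flags, a channel identifier and a window field shows how 8 bytes can carry the listed functions:

```python
# Hypothetical 8-byte protocol extension header. The field layout
# (seq, ack, flags, channel, window) is invented for illustration;
# it is not the CSELP header from the paper.
import struct

HDR = struct.Struct("!HHBBH")   # 2+2+1+1+2 = 8 bytes, network byte order

def pack_header(seq, ack, flags, channel, window):
    return HDR.pack(seq, ack, flags, channel, window)

def unpack_header(data):
    seq, ack, flags, channel, window = HDR.unpack(data[:HDR.size])
    return {"seq": seq, "ack": ack, "flags": flags,
            "channel": channel, "window": window}

raw = pack_header(seq=100, ack=99, flags=0x01, channel=3, window=512)
```

Sequence/ack fields support recognition and retransmission, while a window field supports flow control, matching the functions the extension is said to provide.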
  • WANG Zhao-wen,JIANG Ze-jun,CHEN Jin-chao
    Computer Engineering. 2014, 40(9): 291-294,299. https://doi.org/10.3969/j.issn.1000-3428.2014.09.058
    To address the imperfect real-time behavior of memory management under the Linux system,this paper designs a solution to improve its timeliness. The solution works in three aspects:establishing a mapping between virtual addresses and physical addresses to reduce switches between user mode and kernel mode,locking memory to avoid page swapping,and improving the original memory management algorithm to remove nondeterministic operations. The modified memory management algorithm is based on the principles of partitioned management and best fit,and its time complexity is O(1). Experimental results show that this solution improves memory management performance markedly;under memory pressure the effect is more obvious,with a performance improvement rate of up to 49.5% . It meets real-time requirements.
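The abstract does not detail how partitioned management and best fit combine to give O(1) allocation; a common way to obtain that bound is fixed size classes with per-class free lists, sketched below as an assumption-laden illustration rather than the paper's algorithm (the class sizes and pool size are arbitrary):

```python
# Sketch: per-size-class free lists give constant-time allocate/free.
# "Best fit" degenerates to picking the smallest class that is big
# enough, a constant-length scan. Sizes here are illustrative only.
class PartitionedAllocator:
    CLASSES = [32, 64, 128, 256]          # block sizes in bytes

    def __init__(self, blocks_per_class=4):
        # Pre-carve each partition into fixed-size free blocks.
        self.free_lists = {c: list(range(blocks_per_class))
                           for c in self.CLASSES}

    def alloc(self, size):
        for c in self.CLASSES:            # constant-length loop => O(1)
            if size <= c and self.free_lists[c]:
                return (c, self.free_lists[c].pop())
        return None                       # no suitable block available

    def free(self, handle):
        c, block = handle
        self.free_lists[c].append(block)  # O(1) push back

a = PartitionedAllocator()
h = a.alloc(50)     # smallest class >= 50 bytes is the 64-byte class
```

Because no search over a variable-length free list ever happens, allocation latency is deterministic, which is the property a real-time system needs.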
  • HU Sheng,CHEN Peng,LAN Xiao-ke
    Computer Engineering. 2014, 40(9): 295-299. https://doi.org/10.3969/j.issn.1000-3428.2014.09.059
    In the field of embedded video processing,video imposes critical real-time requirements,so this paper proposes an FPGA-based multi-channel video compositing and de-noising method. It presents a concrete scheme in which four video channels are combined into one channel and a median-filtering de-noising algorithm is applied to that channel. The video is buffered in DDR2 SDRAM,and the hardware structure and logic structure of the median filtering algorithm are demonstrated. The overall system design is described in the Verilog language,and logic synthesis and a hardware test are carried out on a Xilinx FPGA. Experimental results show that the design exploits the FPGA's hardware parallelism and pipeline technology,and real-time video processing is fully achieved.
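As a software reference for the hardware median filter (the 3×3 window and border handling below are assumptions; the abstract does not state the paper's parameters), a stdlib-Python sketch:

```python
# 3x3 median filter over a grayscale image stored as a list of lists.
# Border pixels are copied unchanged, a common hardware simplification;
# the paper's actual border handling is not specified in the abstract.
from statistics import median

def median_filter_3x3(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = median(window)   # middle of 9 sorted samples
    return out

noisy = [
    [10, 10, 10, 10],
    [10, 255, 10, 10],   # impulse ("salt") noise at (1, 1)
    [10, 10, 10, 10],
    [10, 10, 10, 10],
]
clean = median_filter_3x3(noisy)
```

The per-pixel sort of nine samples is exactly the operation that hardware implementations unroll into a compare-exchange network and pipeline.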
  • DENG Yi-gui,WU Yu-ying
    Computer Engineering. 2014, 40(9): 300-304. https://doi.org/10.3969/j.issn.1000-3428.2014.09.060
    With the development of the Internet,the exponential growth of information resources has brought many negative effects,which means that a more secure and healthy network environment should be constructed. To solve this problem,this paper proposes a Sensitive Word Decision Tree based Information Filtering Algorithm(SWDT-IFA) for content-based Web pages. The algorithm needs neither a dictionary nor word segmentation:it builds a decision tree of sensitive words,lets the Web text traverse the decision tree as a data stream,records word frequency,positional information and sensitivity level,and calculates the sensitivity degree of the text in order to filter sensitive content. Experimental results show that the SWDT-IFA algorithm achieves high precision and recall ratios with a low time complexity that can meet the real-time demands of the network environment.
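A decision tree of sensitive words matched against a character stream is, in effect, a trie; only that matching idea follows the abstract, while the word list and sensitivity levels below are placeholder examples:

```python
# Sketch: trie ("decision tree") of sensitive words matched over a
# character stream, with no dictionary and no word segmentation.
def build_trie(words):
    root = {}
    for word, level in words.items():
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$level"] = level          # end-of-word marker with its level
    return root

def scan(text, trie):
    """Total sensitivity score: sum of the levels of every match."""
    score = 0
    for start in range(len(text)):
        node = trie
        for ch in text[start:]:
            if ch not in node:
                break
            node = node[ch]
            if "$level" in node:        # a sensitive word ends here
                score += node["$level"]
    return score

trie = build_trie({"bad": 2, "badge": 1, "worse": 3})
total = scan("a bad badge", trie)
```

Because matching walks the tree character by character, no up-front segmentation of the text is needed, which is the property the algorithm relies on.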
  • LI Zhao,ZHENG Hong
    Computer Engineering. 2014, 40(9): 305-311. https://doi.org/10.3969/j.issn.1000-3428.2014.09.061
    Run time and resource consumption change in opposite directions as the number of Processing Elements(PEs) grows. The rules governing run time and resource consumption with respect to the number of PEs are analyzed,and the variation trends of both are obtained. An optimization objective function based on run time and resource consumption is established,and the existence and uniqueness of its minimum are proved. On this basis,a multi-PE optimization method based on run time and resource consumption is proposed,which achieves run-time optimization with the least resource consumption. To validate the method,optimized designs for the computation of the gray-level co-occurrence matrix and for single-precision floating-point matrix multiplication are presented. Experimental results indicate that the runtime of the gray-level co-occurrence matrix computation is improved by up to 6.79 times compared with the old method,and the combined runtime and area consumption result is 3.3 times better than that of the old method,so the optimization of runtime and area consumption is achieved.
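The abstract does not give the objective function's form. One plausible shape, used here purely for illustration, is a weighted sum of a run time that falls roughly as 1/n and a resource cost that grows linearly with the PE count n; such a function has a unique interior minimum, matching the existence-and-uniqueness claim (the constants T1 and c are made up):

```python
# Illustrative objective: f(n) = T1/n + c*n, where run time shrinks
# with the PE count n while resource cost grows. T1 and c are made-up
# constants; the paper's actual function is not stated in the abstract.
def objective(n, T1=100.0, c=1.0):
    return T1 / n + c * n

def best_pe_count(max_pe=64):
    # Exhaustive scan over the small integer range of feasible PE counts.
    return min(range(1, max_pe + 1), key=objective)

n_opt = best_pe_count()
```

For this shape the continuous minimum sits at n = sqrt(T1/c), so with T1 = 100 and c = 1 the scan settles on 10 PEs.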
  • LIU Kai,ZHOU Xue-zhong,YU Jian,ZHANG Run-shun
    Computer Engineering. 2014, 40(9): 312-316. https://doi.org/10.3969/j.issn.1000-3428.2014.09.062
    Traditional Chinese Medicine(TCM) medical records are important data resources for TCM research. Their main form is still free text,so it is necessary to extract structured information from the records,and named entity extraction is the basic step. This work manually labels 413 Chinese-language medical records and designs four types of feature templates to study the extraction of named entities such as symptoms,diseases and incentives. It compares the results of TCM medical record named entity extraction by Conditional Random Field(CRF),Hidden Markov Model(HMM) and Maximum Entropy Markov Model(MEMM). Combined with appropriate feature templates,CRF performs well,with F1 scores of 0.80 for symptoms,0.74 for disease names and 0.74 for incentives. Compared with HMM and MEMM,CRF has the highest precision and recall rates. This preliminarily shows that CRF is an applicable method for Chinese medical record named entity extraction.
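Character-level feature templates are the ingredient that distinguishes the CRF runs above; the generic unigram/bigram window features sketched below are illustrative examples, not the paper's four templates:

```python
# Sketch: character-window feature extraction of the kind fed to a
# CRF sequence labeler for Chinese text. The template set (unigrams
# and bigrams in a +/-1 window) is a generic example only.
def char_features(chars, i):
    feats = {"c0=" + chars[i]}
    if i > 0:
        feats.add("c-1=" + chars[i - 1])
        feats.add("c-1c0=" + chars[i - 1] + chars[i])
    if i < len(chars) - 1:
        feats.add("c+1=" + chars[i + 1])
        feats.add("c0c+1=" + chars[i] + chars[i + 1])
    return feats

sent = list("头痛发热")   # "headache, fever" as a character sequence
feats0 = char_features(sent, 0)
```

A CRF can weigh such overlapping, non-independent features freely, which is one reason it tends to outperform HMM-style generative taggers on this task.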
  • YANG Hao,JIANG Nan,DU Cheng-lie
    Computer Engineering. 2014, 40(9): 317-320. https://doi.org/10.3969/j.issn.1000-3428.2014.09.063
    Timers of a certain precision are often required in the process of developing a system. In real-time extensions of Windows,aiming at the problems of insufficient original timing accuracy and timing fluctuation,this paper presents a high-precision timer based on the local Advanced Programmable Interrupt Controller(APIC). By programming the per-CPU counting register of the local APIC,it constructs a high-precision clock effectively;it uses a kernel driver to build scheduling management,and memory mapping to improve the data transmission speed between user mode and the real-time kernel so as to guarantee real-time behavior,and a DLL provides a set of interfaces for users. Experimental results show that the scheme effectively solves the timing precision problem and has good usability.
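The APIC programming itself is kernel-mode and Windows-specific, so it cannot be reproduced here; as a user-level analogy only, a hybrid sleep/spin wait in Python illustrates the accuracy-versus-CPU-cost tradeoff that such high-precision timers manage (the 2 ms spin margin is an arbitrary choice):

```python
# User-space analogy of a high-precision timer tick: coarse sleep for
# most of the interval, then spin on a high-resolution clock for the
# remainder. This only illustrates the idea; it is unrelated to the
# kernel-mode APIC programming the paper actually performs.
import time

def precise_wait(interval_s, spin_margin_s=0.002):
    deadline = time.perf_counter() + interval_s
    coarse = interval_s - spin_margin_s
    if coarse > 0:
        time.sleep(coarse)              # cheap, but jittery
    while time.perf_counter() < deadline:
        pass                            # busy-wait for the last stretch
    return time.perf_counter() - deadline   # overshoot in seconds

overshoot = precise_wait(0.01)
```

Spinning only for the final margin keeps CPU waste bounded while pushing the timing error well below what a bare sleep delivers.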