
15 July 2013, Volume 39 Issue 7
    

    Networks and Communications
  • LIU Bai-Lu, YANG Ya-Hui, CHEN Qing-Ni, ZHANG Yang
    Computer Engineering. 2013, 39(7): 1-6. https://doi.org/10.3969/j.issn.1000-3428.2013.07.001
    Improving the real-time performance of an online intrusion detection system in the early stage of a network intrusion is important. Aiming at early detection of network intrusions, which identifies anomalous traffic in the opening phase of an attack, features describing intrusion behavior are extracted and the extraction algorithm is designed. An online intrusion detection system based on the Growing Hierarchical Self-Organizing Map(GHSOM) algorithm is then presented. Experimental results show that with this method the early detection ratio of most attacks is above 80%, and that early detection improves the speed and efficiency of the online intrusion detection system.

  • ZHANG Jia-Jie, OU Feng, SHU Zheng, XU Hua-Qiu, YU Zhi-Yi
    Computer Engineering. 2013, 39(7): 7-10,15. https://doi.org/10.3969/j.issn.1000-3428.2013.07.002
    In order to improve the performance of multi-core processors, this paper presents a novel computing array design scheme built on the traditional hardware accelerator. The communication ports between the computing array and the processor are mapped into the address space of an extended register file, which couples the computing array and the processor tightly. The computing units are connected by a Network-on-Chip(NoC), which enables the computing array to be flexibly configured and highly shared by the multi-core processor. A 1 024-point Fast Fourier Transform(FFT) and an H.264 decoder are implemented on the experimental platform, and the results show that the scheme significantly improves performance and power consumption compared with a pure software solution.

  • CHENG Yong-Sheng, DONG Yu-Han, ZHANG Hua-Dan, LIN Xiao-Kang
    Computer Engineering. 2013, 39(7): 11-15. https://doi.org/10.3969/j.issn.1000-3428.2013.07.003
    Device-to-Device(D2D) communication attracts much research attention due to its ability to increase the efficiency of cellular networks. However, the performance of D2D communication in multi-cell cellular systems has not been fully addressed. Aiming at this problem, this paper analyzes the uplink capacity when D2D communication is enabled in a multi-cell Code Division Multiple Access(CDMA) system. System-level simulation is conducted to evaluate the uplink capacity gain of the D2D link and its affecting factors. Simulation results show that D2D communication provides significant capacity gain. In addition, the performance gain decreases with the D2D link distance, increases with the position of the link within the system, and stays almost constant over different system loads.

  • HUANG Shi-Wei, WANG Yun-Feng
    Computer Engineering. 2013, 39(7): 16-20,25. https://doi.org/10.3969/j.issn.1000-3428.2013.07.004
    An efficient pipelined architecture is presented in this paper to address the high hardware cost of the R-L modular exponentiation algorithm; it is built from Montgomery modular multiplication implemented with pipelining. The new architecture allows the algorithm to be computed in parallel while also reducing hardware cost. Besides, the two extra pre-processing and post-processing steps for converting an integer to its N-residue format in the conventional modular exponentiation algorithm are avoided, which reduces the number of iterations. The results show that the new architecture achieves a data throughput of more than 14 Mb/s when performing modular exponentiation on a Xilinx Virtex-2 series Field Programmable Gate Array(FPGA), while occupying only about half of the hardware resources of the conventional parallel architecture.
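    For reference, a minimal sketch of the right-to-left (R-L) binary modular exponentiation that the architecture accelerates, written in plain Python without the Montgomery multiplication or pipelining described above (illustrative only):

    ```python
    def rl_mod_exp(base, exponent, modulus):
        """Right-to-left binary modular exponentiation.

        The exponent is scanned from least to most significant bit; the
        squaring and the conditional multiplication in each iteration are
        independent, which is what a pipelined hardware design exploits.
        """
        result = 1
        base %= modulus
        while exponent > 0:
            if exponent & 1:                  # current exponent bit is 1
                result = (result * base) % modulus
            base = (base * base) % modulus    # square for the next bit
            exponent >>= 1
        return result

    assert rl_mod_exp(7, 560, 561) == pow(7, 560, 561)
    ```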

  • SHU Zheng-Yu, CUI Meng, LIU Lin
    Computer Engineering. 2013, 39(7): 21-25. https://doi.org/10.3969/j.issn.1000-3428.2013.07.005
    Research on real-time road traffic information collection based on floating cars is currently imperfect, with problems such as complex collection content and inaccurate collected information. This paper presents a real-time road condition information collection method for a simplified road network model based on GPS terminals, which takes into account road segment travel time and stopping time on the road. The GPS terminals, rather than the server, are in charge of computing the collection content and performing map matching. Experimental results show that the method simplifies the collection content and makes the collected information more accurate, which not only reduces the load on the server side but also facilitates the transmission of the collected information.

  • GAO Jun, HAO Zhong-Xiao
    Computer Engineering. 2013, 39(7): 26-30,44. https://doi.org/10.3969/j.issn.1000-3428.2013.07.006
    Extending probabilistic nearest neighbor queries for moving objects in free space, the concept of the Constrained Network moving object Probabilistic Nearest Neighbor query(CNPNN) is put forward, and a CNPNN algorithm based on the network probabilistic Voronoi diagram is proposed. A probabilistic measure based on network distance is used to derive the network probabilistic Voronoi cells of the uncertain objects, and the network probabilistic Voronoi diagram is built to cover the constrained network. An R+ tree is used to index the network probabilistic Voronoi cells to decrease search time. The cell containing the query object is located to obtain the query object's most likely Nearest Neighbor(NN). Experimental results show that the time complexity of the algorithm is O(n^2+mlogmn) and that it performs well under certain conditions.

  • LI Peng, HUI Sui, SUN Qiang, ZHANG Quan-Bing
    Computer Engineering. 2013, 39(7): 31-34,50. https://doi.org/10.3969/j.issn.1000-3428.2013.07.007
    Aiming at the holes caused by exposed regions in virtual viewpoint rendering, this paper proposes a virtual viewpoint rendering method based on image-pyramid inpainting. The viewpoint warping equation is used to generate a virtual viewpoint, and the image pyramid method is employed to restore holes inside the warped view. Depth information is incorporated into the cost function; after Gaussian filtering with zero cancellation and down-sampling, the low-resolution image is up-sampled, and the expanded image information is used to fill the holes. Experimental results show that the proposed algorithm is superior both in subjective detail and in Peak Signal to Noise Ratio(PSNR), that the rendered virtual viewpoint image is free of distortion, and that the generation of false edges in the image is restrained.

  • CHEN Zhi-Wei, DU Min, YANG E-Chao, LI Zi-Chen
    Computer Engineering. 2013, 39(7): 35-39. https://doi.org/10.3969/j.issn.1000-3428.2013.07.008
    In order to solve the problems of ciphertext computation and privacy protection for private cloud users in a cloud computing environment, a homomorphic encryption scheme based on the homomorphic properties of RSA and Paillier is designed. Considering that no efficient fully homomorphic encryption system is currently available, singly homomorphic encryption systems are used to construct a somewhat homomorphic cloud computing scheme, which supports both multiplicative and additive homomorphism and solves the ciphertext processing problem of the public cloud server. The scheme controls ciphertext size during homomorphic computation, supports a large homomorphic computation depth, and handles floating-point data through an appropriate mapping. Simulation results show that, compared with computing directly on plaintext, the proposed scheme needs more computing time, but it offers correctness and security and can meet various forms of computation requests in a cloud computing environment.
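    As a toy illustration of the two homomorphic properties the scheme combines (demonstration-sized, insecure keys; not the authors' construction), the following Python sketch shows that textbook RSA is multiplicatively homomorphic and Paillier is additively homomorphic:

    ```python
    from math import gcd

    # Textbook RSA: Enc(a) * Enc(b) mod n decrypts to a*b mod n.
    p, q, e = 61, 53, 17
    n, phi = p * q, (p - 1) * (q - 1)
    d = pow(e, -1, phi)
    a, b = 42, 7
    c = (pow(a, e, n) * pow(b, e, n)) % n          # multiply ciphertexts
    assert pow(c, d, n) == (a * b) % n             # multiplicative homomorphism

    # Paillier: Enc(a) * Enc(b) mod n^2 decrypts to a+b mod n.
    P, Q = 293, 433
    N = P * Q
    lam = (P - 1) * (Q - 1) // gcd(P - 1, Q - 1)   # lcm(P-1, Q-1)
    g, N2 = N + 1, N * N
    mu = pow(lam, -1, N)

    def enc(m, r):
        return (pow(g, m, N2) * pow(r, N, N2)) % N2

    def dec(c):
        return ((pow(c, lam, N2) - 1) // N) * mu % N

    ca, cb = enc(1000, 5), enc(234, 11)            # r must be coprime to N
    assert dec(ca * cb % N2) == (1000 + 234) % N   # additive homomorphism
    ```
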
  • GUO You-Ming, WANG Feng, TANG Hua, CHEN Lei, XIAO Li-Ban
    Computer Engineering. 2013, 39(7): 40-44. https://doi.org/10.3969/j.issn.1000-3428.2013.07.009
    To address the difficulty current monitoring systems have in reflecting the massive number of nodes and the high coupling of cloud computing clusters, this paper designs a cloud computing monitoring system based on thermodynamic space theory. By projecting node status parameters onto a phase space, changes in node parameters are converted into movements of projected points in that space, so that macro-level monitoring of the cloud computing cluster is realized using phase-space images and parameters. Test results show that the monitoring system can reflect the overall load and working status of the cloud computing cluster in real time and effectively, accomplishing monitoring of the cluster at the macro level.
  • CHEN Jia-Jie, JIANG Gong, WANG Su
    Computer Engineering. 2013, 39(7): 45-50. https://doi.org/10.3969/j.issn.1000-3428.2013.07.010
    Aiming at the decentralized nature of data sources in cloud storage, and considering the relationship between the number of extracted classification rules and the error rates of each agent and of the whole system, this paper extracts rules in distributed agents and merges the rule sets into a central rule database, and proposes criteria under which the error rate of each agent and the upper bound of the whole system's error rate decrease as the number of extracted classification rules increases. The correctness of the proposed criteria is established through formal proof and theoretical derivation and verified by experiment. The experiments also show that the difference between the classification accuracy of the distributed extraction method and that of the centralized extraction method approaches a constant, which demonstrates the feasibility of the distributed extraction method proposed in this paper.
  • TUN Hai-Shuang, ZHANG Liang, LI Jie-Hui
    Computer Engineering. 2013, 39(7): 51-54. https://doi.org/10.3969/j.issn.1000-3428.2013.07.011
    When the resources of an Infrastructure as a Service(IaaS) cloud computing center need to be reallocated, the Minimization of Migrations(MM) strategy performs many migrations when choosing the migration set during live migration, which increases the probability of Service Level Agreement(SLA) violations and generates a great deal of useless energy consumption. Considering this problem, a modified strategy, Find-Migration-Search(FMS), is proposed. Using the historical data of services, it obtains the status of each virtual machine and then scans each physical machine in two phases. CloudSim simulation results indicate that the modified FMS strategy not only lowers the number of virtual machine migrations and the energy consumption, but also satisfies the services' SLA much better, achieving optimized resource deployment under the IaaS model.
  • LIU Ting-Ting, DIAO Yong
    Computer Engineering. 2013, 39(7): 55-58. https://doi.org/10.3969/j.issn.1000-3428.2013.07.012
    To allow users to check the integrity of cloud data in time and to retrieve the data even when part of it is destroyed, a privacy-preserving multi-copy integrity verification scheme is proposed. A multi-copy mechanism based on the Parakh secret sharing scheme ensures data retrievability, and a segmentation mechanism separates the user's identity information from the available data. A storage authentication code is designed to establish the link between data and its owner while preventing attackers from obtaining the relationship between a user and his data. A challenge-response protocol based on multi-prover zero-knowledge proof is proposed. Analysis shows that the scheme is space efficient for cloud servers and protects identity privacy against attackers, resolving the problems arising from the outsourcing service mode of cloud computing and from the untrustworthiness of the cloud service provider.
  • ZHANG Xiao-Qiang, HE Zhong-Tang, LI Chun-Lin, JIAN Qiong-Fen, ZHANG Heng-Chi
    Computer Engineering. 2013, 39(7): 59-62,72. https://doi.org/10.3969/j.issn.1000-3428.2013.07.013
    To solve the problem of heterogeneous user requirements in cloud resource provisioning, this paper proposes a cloud resource provisioning strategy based on non-cooperative game utility optimization. The strategy provides resources according to users' bids through a proportional sharing mechanism. The bidding function is solved, and the existence of a Nash equilibrium solution for the optimal bidding set is proved. Experimental results show that the strategy can reflect the floating relationship between user demand and resource price, regulate users' bids and resource allocation, and achieve better fairness, equilibrium and rationality.
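    As a hedged illustration of the proportional-sharing rule described above (the function name and numbers below are illustrative, not taken from the paper), each user receives a share of capacity proportional to its bid:

    ```python
    def proportional_share(capacity, bids):
        """Allocate a divisible resource in proportion to user bids:
        user i with bid b_i receives capacity * b_i / sum(bids)."""
        total = sum(bids.values())
        if total == 0:
            return {user: 0.0 for user in bids}
        return {user: capacity * bid / total for user, bid in bids.items()}

    # Example: 100 units of CPU shared among three bidders.
    print(proportional_share(100.0, {"u1": 5.0, "u2": 3.0, "u3": 2.0}))
    # -> {'u1': 50.0, 'u2': 30.0, 'u3': 20.0}
    ```
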
  • WEN Sha, LUO Yu, CHEN Chen
    Computer Engineering. 2013, 39(7): 63-66. https://doi.org/10.3969/j.issn.1000-3428.2013.07.014
    When building a disaster tolerance system, migrating large amounts of data causes long stagnation of local business. In addition, traditional disaster tolerance systems transmit data by pushing, which results in an excessive accumulation of data in local memory. To solve these two problems, this paper designs and implements a dynamic disaster tolerance system based on storage virtualization. The system adopts dynamic image loading and asynchronous data transmission based on pulled logs. Test results show that the system can provide disaster tolerance service for the local site without changing the local storage architecture, ensure data consistency, and incur a performance penalty of less than 20%.
  • QIU Qing, WANG Yi-Ji, MA Hang-Kong, LI Xiao-Yong
    Computer Engineering. 2013, 39(7): 67-72. https://doi.org/10.3969/j.issn.1000-3428.2013.07.015
    Existing dissemination methods focus on a single optimization objective and fail to ensure reliability and efficiency simultaneously, which makes them unable to meet the need for reliable and efficient dissemination in emergency environments. A reliable dissemination method based on a dynamic multicast tree, called RDBDMT, is proposed. RDBDMT clusters and identifies nodes hierarchically according to the delays between them, on which a hierarchical overlay is built, and adopts prefix-matching routing over a dynamic multicast tree according to the node identifications. Theoretical analysis and experimental results show that RDBDMT is more reliable and efficient than existing methods, even in emergency environments where a large number of messages are published simultaneously within a very short time.
  • DIAO Xiao-Yong, YANG Yang, WANG Ning
    Computer Engineering. 2013, 39(7): 73-75,82. https://doi.org/10.3969/j.issn.1000-3428.2013.07.016
    Re-uploading songs that have already been shared wastes network bandwidth and storage, which calls for data de-duplication. However, current de-duplication approaches based on file bit-level features cannot recognize the same song after signal processing or compression. Aiming at this problem, this paper proposes a near-duplicate elimination method for massive MP3 files based on acoustic fingerprints. It combines the certainty of message digests with the robustness of acoustic fingerprints: after Bloom Filter(BF) de-duplication based on message digests, dimensionality-reduced acoustic fingerprints are used for a second, near-duplicate de-duplication pass. This keeps the process efficient while greatly improving the de-duplication ratio. Experimental results show that the method doubles the de-duplication rate compared with the Content-defined Chunking(CDC) method and has good extensibility.
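    For context, a minimal Bloom filter sketch in Python showing the kind of first-pass membership test on message digests mentioned above (illustrative parameters; the paper's fingerprinting pipeline is not reproduced here):

    ```python
    import hashlib

    class BloomFilter:
        """Minimal Bloom filter: k hash positions over an m-bit array."""

        def __init__(self, m_bits=1 << 20, k_hashes=4):
            self.m, self.k = m_bits, k_hashes
            self.bits = bytearray(m_bits // 8)

        def _positions(self, item):
            for i in range(self.k):
                digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(digest[:8], "big") % self.m

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def might_contain(self, item):
            # False means definitely new; True may be a false positive.
            return all(self.bits[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(item))

    bf = BloomFilter()
    bf.add("md5-of-song-A")
    assert bf.might_contain("md5-of-song-A")
    assert not bf.might_contain("md5-of-song-B")   # false positive is very unlikely
    ```
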
  • CHEN Dong-Meng, LIU Jian, WANG Dong-Qi, XU Xiao-Wei
    Computer Engineering. 2013, 39(7): 76-82. https://doi.org/10.3969/j.issn.1000-3428.2013.07.017
    Due to high time and space complexity and the limited memory of physical machines, traditional clustering algorithms usually cannot effectively analyze and process large-scale network data. To solve this problem, this paper proposes a distributed clustering algorithm for network data based on the MapReduce model. It adopts MRC theory to design a bounded number of MapReduce rounds so as to control the time spent in the shuffle stage, and uses in-Map merging to control network traffic. When merging intermediate results, only clusters are merged and their internal nodes are not considered, which controls memory overhead. Experiments on simulated data sets show that, as the data size and cluster scale increase, the CAMR algorithm achieves good speedup and scalability.
  • ZHOU Fu-Beng, XIE Jiang, DING Qiu-Lin
    Computer Engineering. 2013, 39(7): 83-85,93. https://doi.org/10.3969/j.issn.1000-3428.2013.07.018
    In this paper, a secondary block storage method is proposed to solve the problems of current distributed processing environments, in which the data redundancy factor is too high while data availability is low. With an algorithm based on improved Reed-Solomon(RS) coding, the blocks in a distributed system are divided into sub-blocks, and the sub-blocks are then encoded and stored on different computers to provide data redundancy. Experimental results show that this method effectively reduces data redundancy and running time while increasing data availability.
  • BANG Xin, TAN Zhang, HUANG Wen-Jun, WANG Xin-Hua
    Computer Engineering. 2013, 39(7): 86-89. https://doi.org/10.3969/j.issn.1000-3428.2013.07.019
    A design for Android-based mobile monitoring software for industrial control is proposed. Using object-oriented and hierarchical methods, monitoring software with industrial flow charts, alarm pushing and login authentication is developed. To ensure system compatibility, the mobile terminal server is set up on the factory's original network topology. Combining Android NDK development, multi-level page-table mapping and asynchronous network transmission, the software increases the transmission speed of tag data to ensure real-time performance. Test results show that the software is practical and brings convenience to field operators.
  • GU Zong-Hua, WANG Chao, SUN Zheng, LI Gong
    Computer Engineering. 2013, 39(7): 90-93. https://doi.org/10.3969/j.issn.1000-3428.2013.07.020
    AUTomotive Open System ARchitecture(AUTOSAR) is an open and standardized automotive software architecture that is widely adopted by the automotive industry. This paper presents a simulation tool for verifying AUTOSAR design models before deployment on the target hardware platform. The tool aims to ensure maximum faithfulness by simulating the software of the target electronic control unit at the source-code level. The results show that the simulator gives developers more confidence in the correctness of the overall software stack than traditional simulation techniques and improves the efficiency of system development.
  • HUANG Yuan, FU Xiao-Dong, GU Nan, DAI Zhi-Hua, MA Yu-Qian
    Computer Engineering. 2013, 39(7): 94-98,114. https://doi.org/10.3969/j.issn.1000-3428.2013.07.021
    Calculating the control-flow distance is of great significance to workflow reuse. This paper puts forward a method for calculating the control-flow distance of workflows based on the process control structure. It introduces the process of separating control nodes from the workflow and generating the control-flow diagram. Based on the distance between control nodes, it builds a model that measures the control-flow distance of workflows through the control-flow diagram, and proves in theory that the distance measure satisfies reflexivity, symmetry and the triangle inequality. Example analysis shows that the method can reflect the distance between workflows.
  • LI Yang
    Computer Engineering. 2013, 39(7): 99-101,118. https://doi.org/10.3969/j.issn.1000-3428.2013.07.022
    Aiming at the problem of which scheduling strategy to use when calling the component services of a composed Web service so that the service runs fastest, a scheduling strategy is proposed. In this strategy, the execution of a composed Web service is modeled as an AOE network, and the original queues of component services are separated from the network; an order table of queues and a difference-value matrix of queues are used to merge these original queues into fewer final queues that can be called in parallel. Each queue is handled by a scheduling program that first calls the component service whose calling condition is satisfied. Results show that the time needed to arrange the components is short and that parallel scheduling can reduce the running time of the composed service.
  • LI Han, JIANG Na, DU Cheng-Lie
    Computer Engineering. 2013, 39(7): 102-105,122. https://doi.org/10.3969/j.issn.1000-3428.2013.07.023
    This paper analyzes the factors that affect the real-time performance of memory management on the Windows platform and presents a solution to improve it. The solution works in three ways: establishing a mapping between virtual and physical addresses to avoid switching between user mode and kernel mode; locking pages in physical memory to avoid page faults and page-swapping operations; and improving the original memory allocation algorithm to remove nondeterministic operations. Experimental results show that this solution improves the efficiency of memory management under Windows, makes the time spent on memory operations steady, and improves the real-time performance of memory management on the Windows platform.
  • BIAN Li-An, HAN Chang-Cai, LI Yuan
    Computer Engineering. 2013, 39(7): 106-109. https://doi.org/10.3969/j.issn.1000-3428.2013.07.024
    For relay selection in cooperative communications, adaptive relay selection strategies are proposed for different power allocation schemes, in which an additional relay can be employed when necessary. For Equal Power Allocation(EPA) among the source and relays, the search range of candidate relays is effectively reduced by using the defined feasible two-relay region. For Optimal Power Allocation(OPA), the potential relays are searched exhaustively by geometric programming. Numerical simulation results show that the proposed adaptive relay selection strategy with power optimization significantly reduces the power consumption of the system compared with the traditional fixed selection strategy.
  • LV Yu-Hua, YU Ji-Guo, WANG Chen-Xi
    Computer Engineering. 2013, 39(7): 110-114. https://doi.org/10.3969/j.issn.1000-3428.2013.07.025
    Link scheduling is an important issue in Wireless Sensor Networks(WSN). For the problem of Shortest Link Scheduling(SLS), this paper gives a constant-approximation algorithm with linear power assignment under the physical interference model. Using a grid partition, all links of each link set scheduled in the corresponding time slot satisfy the SINR threshold constraint. The effectiveness of the algorithm and its approximation ratio are discussed through theoretical analysis. Simulation results show that the algorithm has less time delay than the TONOYAN algorithm.
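    As a hedged sketch of the constraint such scheduling must satisfy (illustrative parameters; not the paper's grid-partition algorithm), a set of links transmitting in the same slot is feasible under the physical interference model only if every link's SINR reaches the threshold β:

    ```python
    import math

    def sinr_feasible(links, power, alpha, noise, beta):
        """links: list of ((sx, sy), (rx, ry)) sender/receiver coordinates.
        A link's SINR is its received power divided by noise plus the power
        received from every other concurrently transmitting sender."""
        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])

        for i, (si, ri) in enumerate(links):
            signal = power * dist(si, ri) ** (-alpha)
            interference = sum(power * dist(sj, ri) ** (-alpha)
                               for j, (sj, _) in enumerate(links) if j != i)
            if signal / (noise + interference) < beta:
                return False
        return True

    links = [((0, 0), (1, 0)), ((10, 10), (11, 10))]
    print(sinr_feasible(links, power=1.0, alpha=3.0, noise=1e-4, beta=2.0))   # True
    ```
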
  • DIAO Shi-Qin, DU Rong, LI Jian, LI Sheng-Gong
    Computer Engineering. 2013, 39(7): 115-118. https://doi.org/10.3969/j.issn.1000-3428.2013.07.026
    In cognitive radio networks, the interference caused by secondary users may affect the communication of the primary user, or even break it off. This paper introduces an interference model based on the Poisson distribution, which takes into account the probability of spectrum sensing based on the SNR. For different network environment factors, such as shadowing and multi-path fading, it derives closed-form expressions based on the interference model and analyzes the interference outage probability. Simulation results show that the model reflects the interference in real environments well and can be used to analyze the outage probability; according to the results, different environment factors have a two-fold influence on the outage probability.
  • XU Yang-Kai, CAO Ji, CHEN Xiao-Qun
    Computer Engineering. 2013, 39(7): 119-122. https://doi.org/10.3969/j.issn.1000-3428.2013.07.027
    The Non-line of Sight(NLOS) transmission error is the main difficulty for mobile user location in cellular wireless communication systems, so a novel Time Difference of Arrival(TDOA)/Angle of Arrival(AOA) wireless positioning scheme based on a two-step Kalman filter is proposed. The estimate of the Kalman filter is used to calculate the variance of the NLOS error, which adjusts the Kalman filter parameters and mitigates the NLOS error of the measurements. These preprocessed measurements are then input to a hybrid location estimator implemented with an Extended Kalman Filter(EKF). Experimental results show that the algorithm can mitigate the NLOS error and, compared with the Chan algorithm, improve positioning accuracy.
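    For orientation, a minimal scalar Kalman filter that smooths a noisy range measurement sequence; this is only a generic filtering sketch with made-up numbers, not the paper's two-step NLOS-mitigation design or its EKF location estimator:

    ```python
    def kalman_1d(measurements, process_var=1.0, meas_var=100.0):
        """Scalar Kalman filter over a sequence of noisy range measurements,
        using a random-walk state model."""
        x, p = measurements[0], meas_var      # initial estimate and variance
        estimates = [x]
        for z in measurements[1:]:
            p = p + process_var               # predict
            k = p / (p + meas_var)            # Kalman gain
            x = x + k * (z - x)               # update with innovation z - x
            p = (1 - k) * p
            estimates.append(x)
        return estimates

    noisy_ranges = [412.0, 418.5, 409.2, 430.1, 415.7, 421.3]
    print(kalman_1d(noisy_ranges))
    ```
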
  • LI Zhe-Jing, WANG Wei, ZHANG Xiao
    Computer Engineering. 2013, 39(7): 123-126,132. https://doi.org/10.3969/j.issn.1000-3428.2013.07.028
    Aiming at the limitations of the traditional Transmission Control Protocol(TCP) in heterogeneous networks, and using grey correlation analysis of network parameters, this paper puts forward TCP-N, a protocol that differentiates packet loss based on the round-trip delay-jitter product. According to the measured round-trip delay-jitter product, the algorithm constructs a membership function and uses the membership degree to distinguish wireless error loss from network congestion loss, applying the corresponding congestion control. Simulation results show that, compared with traditional TCP, TCP-N more accurately differentiates wireless error loss from congestion loss in heterogeneous networks, improves TCP throughput and bandwidth utilization, and improves network performance.
  • CAO Shen-Hao, LIU Shun-Lan
    Computer Engineering. 2013, 39(7): 127-132. https://doi.org/10.3969/j.issn.1000-3428.2013.07.029
    In order to improve the sum rate of two-way relay systems, this paper proposes a joint relay selection and power allocation strategy based on a network-coded two-way relay system model. Relay selection schemes of Maximized Minimum Channel Gain(MMCG) and Maximized Harmonic Average(MHA) are proposed, and the optimal power allocation between the sources and the relay is derived by maximizing the sum rate of the system. Simulation results show that, in a two-way relay system with Physical-layer Network Coding(PNC), the proposed joint MMCG relay selection and optimal power allocation strategy improves the system sum rate by about 1.6 bit/s/Hz compared with the joint BRS relay selection strategy.
  • TANG Jia-Dong, CA Meng
    Computer Engineering. 2013, 39(7): 133-136,141. https://doi.org/10.3969/j.issn.1000-3428.2013.07.030
    The classical clustering communication protocol Low Energy Adaptive Clustering Hierarchy(LEACH) for Wireless Sensor Networks(WSN) is analyzed, and an improved cluster-head election algorithm is proposed to extend the network lifetime. If the residual energy of any cluster-head falls below a given threshold, cluster-head election is performed across the whole network; a new strategy based on the relative density of nodes is introduced, which optimizes the way the election threshold is selected. Otherwise, the election is performed within the cluster, and new cluster-head nodes are chosen according to residual energy, position and density. Experimental results indicate that the improved LEACH algorithm saves and balances node energy consumption effectively, delays the death of the first node, and extends the network lifetime.
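    For reference, a sketch of the classical LEACH election threshold that such improvements start from (this is the standard LEACH formula only; the residual-energy and density terms added by the paper are not shown):

    ```python
    import random

    def leach_threshold(p, r):
        """Classical LEACH threshold T(n) for round r with cluster-head ratio p."""
        return p / (1 - p * (r % round(1 / p)))

    def elects_itself(p, r, was_cluster_head_recently):
        """A node becomes cluster-head if it has not served in the last 1/p
        rounds and its random draw falls below the threshold."""
        if was_cluster_head_recently:
            return False
        return random.random() < leach_threshold(p, r)

    print(leach_threshold(0.05, 3))   # ~0.0588 for 5% cluster-heads in round 3
    ```
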
  • MA Chao, CHAN Hong, CHEN Juan
    Computer Engineering. 2013, 39(7): 137-141. https://doi.org/10.3969/j.issn.1000-3428.2013.07.031
    To lower transmission delay while saving energy, GFN-MAC, a protocol with a cross-period, doubly increasing synchronous sleep schedule, is designed for the foundation layer of heterogeneous Wireless Sensor Networks(WSN) characterized by a multi-hop cluster structure, periodic short data and detection data streams. The CSMA protocol is used while the heterogeneous WSN is clustering, and GFN-MAC runs once clustering is complete. The sleep schedule makes nodes at different layers of a cluster sleep and wake on different cycles. Simulation results show that the protocol reduces transmission delay and energy consumption.
  • DANG Xiao-Chao, TAO Gao-Gao, HAO Tie-Jun
    Computer Engineering. 2013, 39(7): 142-147,151. https://doi.org/10.3969/j.issn.1000-3428.2013.07.032
    Aiming at the unbalanced per-round node energy consumption of clustering in Wireless Sensor Networks(WSN), this paper puts forward the Mobile Agent Based Multilayer Clustering(MABMC) algorithm and constructs the Multilayer Clustering Energy Model(MCEM), using Mobile Agent(MA) technology to elect the cluster heads and collect the data of each round. Simulation results show that, compared with the Energy Efficiency Multilayer Clustering(EEMLC) and Low Energy Adaptive Clustering Hierarchy(LEACH) algorithms, the MABMC algorithm reduces per-round energy consumption, balances energy consumption better, and prolongs the network life cycle.
  • XIAO Jing, ZHENG Geng-Sheng, FANG Yong, CHEN Di
    Computer Engineering. 2013, 39(7): 148-151. https://doi.org/10.3969/j.issn.1000-3428.2013.07.033
    Aiming at the problem of energy consumption in Wireless Sensor Networks(WSN), this paper presents Routing Based on Clustering with Self-organizing map and Chain(RBCSC). Built on the Low Energy Adaptive Clustering Hierarchy(LEACH) protocol, its key idea is to cluster the network with a Self-Organizing Map(SOM) and to chain nodes with a greedy algorithm, which improves the clustering of the LEACH protocol. Simulation results show that the RBCSC protocol consumes less energy and survives longer than LEACH.
  • LI Yu-Na, CENG Xin-Bin, HE Jia-Ming
    Computer Engineering. 2013, 39(7): 152-155. https://doi.org/10.3969/j.issn.1000-3428.2013.07.034
    In cellular and Ad hoc hybrid networks, cell-edge users are vulnerable to co-frequency interference from neighboring cell-edge users. To address this problem, a subcarrier allocation scheme that mitigates Inter-cell Interference(ICI) is proposed, which enhances the capacity of users near the cell edge. Key features of the proposed scheme are subcarrier allocation in a user-selective manner, considering the selectivity of the channel gain, and minimization of subcarrier reuse. Simulation results show that the proposed scheme outperforms the Soft Frequency Reuse(SFR) and Fractional Frequency Reuse(FFR) schemes in improving the throughput of cell-edge users.
  • WANG Xiao-Jian, LIAO Xiao-Feng, HUANG Hong-Yu
    Computer Engineering. 2013, 39(7): 156-160,164. https://doi.org/10.3969/j.issn.1000-3428.2013.07.035
    The rainbow table method is a time-memory trade-off approach for reversing one-way cryptographic functions, and its cracking time is restricted by hardware performance and storage space. In view of this issue, this paper proposes an improved method that cuts down the number of reduction functions and extends the precomputation time so as to shorten the cracking time. By decreasing the number of reduction functions, the number of search paths during table look-up drops, and the amount of computation decreases accordingly. The additional precomputation time is used to optimize the table structure and reduce duplicate data, so that the success rate of cracking is preserved. Experimental results show that the method saves over 30% of the cracking time without enlarging storage space or upgrading hardware.
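    As an illustrative sketch of the chain structure a rainbow table precomputes (toy parameters, MD5 over a tiny numeric password space; not the optimized table layout proposed above), each chain alternates the hash with a position-dependent reduction function and stores only its start and end points:

    ```python
    import hashlib

    SPACE = 10_000          # toy password space: "0000".."9999"
    CHAIN_LEN = 100

    def h(pw: str) -> str:
        return hashlib.md5(pw.encode()).hexdigest()

    def reduce_fn(digest: str, position: int) -> str:
        # Position-dependent reduction maps a digest back into the password space.
        return f"{(int(digest[:8], 16) + position) % SPACE:04d}"

    def chain_end(start_pw: str) -> str:
        pw = start_pw
        for pos in range(CHAIN_LEN):
            pw = reduce_fn(h(pw), pos)
        return pw

    table = {chain_end(f"{s:04d}"): f"{s:04d}" for s in range(0, SPACE, 500)}
    print(len(table), "chains stored, covering up to",
          len(table) * CHAIN_LEN, "hash-password pairs")
    ```
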
  • ZHANG An-Bin, YUE Yun-Tian, ZHANG Chuan-Fu
    Computer Engineering. 2013, 39(7): 161-164. https://doi.org/10.3969/j.issn.1000-3428.2013.07.036
    Considering the computational complexity of wet paper codes, this paper presents a fast embedding strategy and, on this basis, a fast adaptive image steganography scheme based on wet paper codes. Using the Hilbert curve to choose suitable pixels to carry information, the scheme applies the fast embedding strategy to reduce the group size and control the bits when embedding the secret according to the modification matrix. Experimental results indicate that, for each packet, the solution existence rate is improved by about 0.5%, and the probability of reducing the Hamming weight of the modification vector is 73%. Compared with Writing on Wet Paper(WWP), this scheme is obviously faster and produces stego images of better quality.
  • YAN Fa-Wen, HUANG Min, WANG Zhong-Fei
    Computer Engineering. 2013, 39(7): 165-168,172. https://doi.org/10.3969/j.issn.1000-3428.2013.07.037
    Considering the hazards of abnormal Internet traffic, such as Web content that is difficult to manage, consumed network bandwidth and the continuing spread of viruses, this paper puts forward a detection method that combines the Bloom Filter(BF) algorithm with the analysis of several abnormal flow behaviors. It analyzes the BF algorithm, sampling methods and common abnormal flow behaviors in Peer-to-Peer(P2P) networks, detects flows by combining the high space efficiency of BF with sampling, and counts these flow behaviors so as to detect and control abnormal flows effectively. Experimental results show that the method accelerates detection and improves accuracy.
  • DONG Xin-Feng, ZHANG Wen-Zheng, ZHOU Yu, CAO Yun-Fei, MU Dao-Guang
    Computer Engineering. 2013, 39(7): 169-172. https://doi.org/10.3969/j.issn.1000-3428.2013.07.038
    Present methods of constructing Boolean functions with optimal algebraic immunity are mostly based on the support set; methods based on the algebraic normal form are few. This paper gives a method of constructing optimal algebraic immune Boolean functions from the algebraic normal form and studies the main cryptographic properties of these functions, such as algebraic degree, algebraic immunity, Hamming weight and nonlinearity. The number of optimal algebraic immune functions constructed is given. Using this construction, a large class of Boolean functions with optimal algebraic immunity can be obtained, which contains some known special results, showing that the method is more general and yields more functions with maximum algebraic immunity.
  • CHEN Zi-Beng, JIAN Song-Rong
    Computer Engineering. 2013, 39(7): 173-176. https://doi.org/10.3969/j.issn.1000-3428.2013.07.039
    To address information security problems in Wireless Sensor Networks(WSN), this paper proposes a multi-factor identity authentication security scheme, which covers identity authentication, session key generation, and distribution between users and sensor nodes. Compared with other existing algorithms under the same number of network nodes, this scheme achieves lower cost and higher security, and is suitable for the common security authentication needs of WSN applications.
  • WANG Chen-Guang, JIAO Shu-Shan, HEI Yong
    Computer Engineering. 2013, 39(7): 177-180. https://doi.org/10.3969/j.issn.1000-3428.2013.07.040
    A basic architecture is proposed for reducing the implementation complexity of the SM4 block cipher. The architecture reuses the hardware of the encryption/decryption and key expansion modules because the encryption/decryption algorithm is very similar to the key expansion algorithm. An optimal trade-off among control-logic complexity, reused-module complexity and throughput is achieved through careful analysis and choice of the specific realization. An SM4 cipher IP is designed based on this architecture; its cost is only 55% of that of the traditional design on a Field Programmable Gate Array(FPGA). The IP is also synthesized with the SMIC 0.18 μm CMOS process, where its area is 0.079 mm² with a throughput of 100 Mb/s. Synthesis results show that the proposed architecture efficiently reduces the implementation complexity of the SM4 block cipher.
  • SONG Bei, SONG Yu-Rong
    Computer Engineering. 2013, 39(7): 181-184. https://doi.org/10.3969/j.issn.1000-3428.2013.07.041
    When a disease breaks out in a population, risk awareness and avoidance behavior adjust adaptively as the infection density changes in real networks. Considering this changing process of risk awareness and avoidance behavior, a time-varying rewiring probability is introduced into the Susceptible-Infected-Susceptible(SIS) network propagation model, and propagation dynamics on adaptive networks are studied. The results indicate that the faster the time-varying rewiring probability increases, the slower the virus spreads, and that a larger stable value of the rewiring probability results in a smaller propagation size. These findings suggest that, in the real world, the earlier the rewiring strategy is applied, the more effectively it inhibits the virus; the final infection scale decreases as awareness of the virus deepens and avoidance behavior becomes more comprehensive.
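    As a hedged sketch of the kind of adaptive-rewiring SIS dynamics described above (a toy discrete-time simulation with made-up parameters, not the paper's model or its time-varying rewiring schedule):

    ```python
    import random

    def sis_adaptive(adj, beta, gamma, rewire_p, steps):
        """Discrete-time SIS epidemic on an adjacency-set graph where a
        susceptible node may rewire a link away from an infected neighbor
        to a randomly chosen susceptible node (avoidance behavior)."""
        nodes = list(adj)
        infected = {random.choice(nodes)}
        history = []
        for _ in range(steps):
            new_infected = set(infected)
            for u in nodes:
                if u in infected:
                    if random.random() < gamma:          # recovery
                        new_infected.discard(u)
                    continue
                for v in list(adj[u]):
                    if v not in infected:
                        continue
                    if random.random() < rewire_p:       # rewire away from v
                        candidates = [w for w in nodes
                                      if w not in infected and w != u]
                        if candidates:
                            w = random.choice(candidates)
                            adj[u].discard(v); adj[v].discard(u)
                            adj[u].add(w); adj[w].add(u)
                    elif random.random() < beta:         # infection
                        new_infected.add(u)
                        break
            infected = new_infected
            history.append(len(infected))
        return history

    N = 50
    ring = {i: {(i - 1) % N, (i + 1) % N} for i in range(N)}
    print(sis_adaptive(ring, beta=0.3, gamma=0.1, rewire_p=0.2, steps=30))
    ```
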
  • ZHOU Qing-Lei, LI Bin
    Computer Engineering. 2013, 39(7): 185-188. https://doi.org/10.3969/j.issn.1000-3428.2013.07.042
    To improve the low resilience and low encoding data rate of software watermarks, a double software watermarking scheme based on tamper-proofing is put forward. It exploits the stealth of register allocation to build an Improved Color Permutation(ICP) algorithm, and combines the high data rate of radix-k encoding with the high resilience of Planted Plane Cubic Tree(PPCT) encoding into a Double circular linked Planted Plane Cubic Tree(DPPCT) mixed encoding. After the watermark is embedded, a checksum mechanism and Advanced Encryption Standard(AES) code encryption are used to prevent reverse engineering and other attacks on the software watermark. Theoretical analysis and experimental results show that the scheme has high stealth, robustness, resilience and data rate.
  • DAN Yong-Fang, DU Xiao-Ni, YAN Tong-Jiang, LI Xu
    Computer Engineering. 2013, 39(7): 189-192,199. https://doi.org/10.3969/j.issn.1000-3428.2013.07.043
    Following the construction of generalized cyclotomic sequences, this paper proposes a construction method for generalized cyclotomic sequences of arbitrary order with period p^m, using the theory of polynomial roots over the finite field GF(2). All possible values of the linear complexity of the sequences are obtained. The results show that the new sequences have large linear complexity and can resist attacks based on the Berlekamp-Massey(B-M) algorithm. The construction generalizes existing ones and also corrects some incorrect proofs in the literature.
  • MENG Ti-Wei, HU Ai-Qun, SONG Yu-Bei, CHEN Chuan-Zheng, BU Ning, GU Xue-Fei
    Computer Engineering. 2013, 39(7): 193-199. https://doi.org/10.3969/j.issn.1000-3428.2013.07.044
    Based on an analysis of active testing, passive testing and penetration testing, a wireless LAN security test system based on penetration testing is proposed in this paper. The system can perform security protocol compliance verification and equipment protocol security testing against WEP, WPA, WPA2 and WAPI, and can produce a security evaluation report. The design and implementation of the system are described, and test results against wireless devices are given. The results show that the system can perform security protocol compliance verification and penetration testing, and that its test process is automated.
  • LI Jing, LI Lin-Sen
    Computer Engineering. 2013, 39(7): 200-204. https://doi.org/10.3969/j.issn.1000-3428.2013.07.045
    Based on the principles of Side-channel Analysis(SCA) and the power leakage of IC chips, this paper analyzes the relationship between the encryption process and the power leakage of an IC chip. For the problem of power leakage during on-chip Data Encryption Standard(DES) encryption, it presents a power difference function for Differential Power Analysis(DPA) and a correlation analysis method, both based on the S-box output. DPA experiments carried out on the Inspector platform succeed in recovering the DES key of a chip. The experimental results not only prove the correctness of the test method, but also reveal the power-leakage security vulnerability of the traditional DES algorithm used in IC chips.
  • TUN Chun-Yang, LI Shun-Dong
    Computer Engineering. 2013, 39(7): 205-208. https://doi.org/10.3969/j.issn.1000-3428.2013.07.046
    For general access structures, this paper presents an ideal secret sharing scheme on a special type of hypergraph by establishing a one-to-one correspondence between access structures and hypergraphs. The designed distribution and reconstruction algorithms are mainly based on the vector space construction and the (t, t) threshold scheme. The construction finds the longest path in the given acyclic hypergraph, increases the number of vertices in 2-regions, and adds ears onto the hypergraph. Meanwhile, the information rate of the scheme reaches its maximum value of 1.
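    For background, a minimal (t, t) threshold secret-sharing sketch of the kind the construction builds on (additive sharing modulo a prime; illustrative only, not the hypergraph construction itself):

    ```python
    import secrets

    P = 2**127 - 1   # a Mersenne prime used as the share modulus

    def split_tt(secret, t):
        """(t, t) threshold sharing: all t shares are needed to reconstruct."""
        shares = [secrets.randbelow(P) for _ in range(t - 1)]
        shares.append((secret - sum(shares)) % P)
        return shares

    def reconstruct(shares):
        return sum(shares) % P

    s = 123456789
    shares = split_tt(s, t=5)
    assert reconstruct(shares) == s
    assert reconstruct(shares[:4]) != s   # any t-1 shares are useless alone
    ```
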
  • SHI Qiang-Min, HU Yao-Hua, HU Yan-Jun, TUN Xiao-Pei
    Computer Engineering. 2013, 39(7): 209-213. https://doi.org/10.3969/j.issn.1000-3428.2013.07.047
    Aiming at cases in which object tracking fails, such as when the object's appearance changes or when the target and background are similar, an object tracking algorithm based on multi-step color accumulation and texture fusion is proposed. The algorithm consists of two parts, color and texture feature extraction and target-background similarity judgment, which run alternately during tracking according to the situation. When the object's appearance changes, frame differencing within the Region of Interest(ROI) is used to recompute the target center, and a multi-step color model is extracted and accumulated into the old one. Experimental results show that the average tracking error of this algorithm is half that of the color-texture based algorithm and one third that of single visual feature tracking.
  • WANG Yi-Xuan, ZHANG Dun-Mei, HAN Jiang-Meng
    Computer Engineering. 2013, 39(7): 214-218,223. https://doi.org/10.3969/j.issn.1000-3428.2013.07.048
    Autonomous target recognition and positioning are the basis of intelligent forestry robots. Taking tree trunks in a forestry environment as the target, this paper puts forward a real-time digital video processing hardware platform based on binocular vision. The binocular cameras acquire the images, the platform calculates the three-dimensional information, and it then outputs the targeting and ranging results. Experimental results show that the hardware platform can complete image acquisition and processing and achieves the desired effect.
  • WANG Yong-Meng, ZHANG Yang-Dun, XIE Bin-Gong, BO Li-Hu, CHEN Li-Chao
    Computer Engineering. 2013, 39(7): 219-223. https://doi.org/10.3969/j.issn.1000-3428.2013.07.049
    Aiming at the low efficiency of semantic Web service discovery mechanisms in finding services, this paper proposes a method based on fuzzy clustering for optimizing semantic Web service discovery. It adopts a modified Fuzzy C-means(FCM) clustering algorithm to pre-cluster services, comprehensively considering the input, output, precondition and effect of each service as clustering parameters. The existing service matching mechanism is extended so that all four functional parameters of a service are used as factors in the similarity calculation. Experimental results show that, under stable fuzzy clustering conditions, the method achieves an average service recall rate of 79.6% and an average precision rate of 85.9%, higher than FCM clustering that uses only the Input/Output(I/O) parameters.
  • XU Jian-Jiang, CUI Hui-Ji, LI Xiao-Beng, LIANG Chun-Gui, CHEN Sai
    Computer Engineering. 2013, 39(7): 224-227,232. https://doi.org/10.3969/j.issn.1000-3428.2013.07.050
    Because current automatic evaluation of thesis papers is not sufficiently scientific, fair and universal, this paper builds an intelligent evaluation system that combines an expert system with a multi-level fuzzy evaluation mechanism. It uses templates and a rule library for knowledge representation, applies the multi-level fuzzy evaluation method to calculate intermediate data, and produces the final review of a paper through the expert system. Experimental results indicate that the rate of effective evaluation reaches 96.51% on 6 511 papers across 7 subjects and 52 research directions, showing that the system is universal and impartial.
  • JIANG Yu-Tong, YANG Jin-Hua, LIU Zhao, ZHANG Li-Juan, JIANG Cheng-Hao
    Computer Engineering. 2013, 39(7): 228-232. https://doi.org/10.3969/j.issn.1000-3428.2013.07.051
    To realize high-precision, long-distance passive ranging, a calibration method for CCD camera parameters and a binocular ranging system is put forward. A mechanical device for a high-precision binocular ranging system is designed based on the binocular ranging model. For monocular camera calibration, the principal point coordinates and aspect ratio are obtained, a new method of principal point calibration is proposed, and the focal length, first-order radial distortion coefficient, rotation matrix and translation vector are calibrated on the basis of the two-step method using linear measurement. The baseline of the binocular ranging system, the relative position of the two image planes and the optical axis angle between the two cameras are calibrated. Experimental results show that the camera calibration precision is 0.582 6 pixel and the calibration precision of the binocular ranging system is 0.208 mm, which is a good result. The actual measurement error for a 300 m target is less than 0.28%, which satisfies the requirements of a high-precision binocular CCD ranging system.
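    As background, a hedged sketch of the basic rectified-stereo ranging relation such a system relies on, with illustrative numbers rather than the calibrated parameters reported above:

    ```python
    def stereo_range(focal_px, baseline_m, disparity_px):
        """Depth from a rectified stereo pair: Z = f * B / d.

        focal_px     : focal length in pixels
        baseline_m   : distance between the two optical centres in metres
        disparity_px : horizontal pixel offset of the target between images
        """
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a finite range")
        return focal_px * baseline_m / disparity_px

    # Illustrative values: 4 000-pixel focal length, 0.5 m baseline, 6.7 px disparity.
    print(f"range = {stereo_range(4000.0, 0.5, 6.7):.1f} m")   # ~298.5 m
    ```
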
  • TUN Pan-Wen, LI Shi, TIAN Qiang-Heng
    Computer Engineering. 2013, 39(7): 233-236,241. https://doi.org/10.3969/j.issn.1000-3428.2013.07.052
    Question answering is an important means of supporting students' self-directed learning and is a focus of current academic study. This paper designs an intelligent question-answering system that integrates widely distributed network resources using Mashup technology and can answer exercise questions, explain terms, and so on. The system is applied in the teaching of a course, and the results show that it not only greatly reduces teachers' question-answering workload, but also meets the personalized question-answering needs of college students, thus improving their learning efficiency.
  • DU Jing, LEI Zhi-Hui, ZHOU Xiang
    Computer Engineering. 2013, 39(7): 237-241. https://doi.org/10.3969/j.issn.1000-3428.2013.07.053
    Traditional guidance-assisted landing systems suffer from problems such as high cost, low accuracy and poor anti-interference, and cannot satisfy the requirements of accuracy, anti-jamming and repeatable landing on a ship. To resolve this, this paper presents an Unmanned Aerial Vehicle(UAV) vision-guided assisted landing system based on infrared detection. The system hardware includes infrared marker lights, high-dynamic cameras and optical filters; the software includes a light-blob detection algorithm and a binocular intersection algorithm. The blob detection algorithm uses the Normalized Negative Laplacian of Gaussian(NNLOG) to detect and track the centers of the infrared lamps in the images, while the binocular intersection algorithm uses two synchronized cameras to locate and track the target, and a shift calibration method is used in the project. Experimental results show that the measurement precision of the system is better than 5 cm over the critical last 200 m before the ideal landing site, which satisfies the assisted-landing requirement.
  • ZHOU Ben-Da, WANG Xiu-Fa, TAO Hong-Liang
    Computer Engineering. 2013, 39(7): 242-246. https://doi.org/10.3969/j.issn.1000-3428.2013.07.054
    Based on an analysis of the local monotonicity of the modularity function, this paper designs a Local Search and Mutation(LSM) operator and proposes an improved Estimation of Distribution Algorithm(EDA) for solving the community detection problem. The proposed algorithm is tested on benchmark networks and large-scale complex networks. Experimental results show that the average modularity Q obtained by the algorithm over 100 runs on different networks is better than that of the Girvan-Newman(GN) algorithm, the Fast Newman(FN) algorithm and the Tasgin Genetic Algorithm(TGA).
  • DAI Yong-Qian, ZHANG Meng-Wu, CHU Qing-Lin, DAI Yong-Xin
    Computer Engineering. 2013, 39(7): 247-251,256. https://doi.org/10.3969/j.issn.1000-3428.2013.07.055
    The standard Quantum Genetic Algorithm(QGA) converges prematurely to local optima when applied to combinatorial optimization. To solve this problem, this paper analyzes the mutation probability distribution of Q-bits by introducing the concept of a k-bit variation subspace and points out the conflict between the traditional random mutation mechanism and the variation mechanism implied by QGA itself. Based on this analysis, a Stage Large-scale Variation Mechanism Based on Observation(SLVMBOO) is proposed. Its mutation operator is embedded in the quantum rotation policy table, is simple to implement, and is highly efficient. Test results on 0/1 knapsack problems of different scales show that the mechanism effectively avoids premature convergence and successfully jumps out of local optima in combinatorial optimization, and its global optimization ability is superior to that of the standard QGA.
  • LONG Long, DENG Wei
    Computer Engineering. 2013, 39(7): 252-256. https://doi.org/10.3969/j.issn.1000-3428.2013.07.056
    At present, Web pages carry more and more commercial advertising, and green-network systems cannot filter sites with unhealthy content. To solve this problem, this paper proposes a text content extraction algorithm for green-network Web pages. It uses the Document Object Model(DOM) tree to identify and extract the main text-content modules of a page, applies an optimized content extraction algorithm based on particle-swarm weighting to score each section of the main content, and compares the scores against unhealthy keywords to identify and filter harmful Web pages. Experimental results show that, after optimization with the new algorithm, the accuracy of identifying harmful Web pages is 86.9%, the recall is 95.6% and the F value is 91.02%, all higher than before optimization.
  • ZHANG Jing-Lei, WANG Yan-Jiao
    Computer Engineering. 2013, 39(7): 257-260,278. https://doi.org/10.3969/j.issn.1000-3428.2013.07.057
    Because traditional pixel-based matching algorithms suffer from high error matching rates, a stereo matching algorithm based on image region segmentation and Belief Propagation(BP) is proposed. The mean shift algorithm is applied to segment the reference image into regions of homogeneous color, and the initial disparity of each pixel is calculated using an adaptive-weights approach. Disparity plane parameters are obtained by fitting a plane model to each segmented region. The final disparity map is acquired by computing the optimal disparity plane for each region with an improved region-based belief propagation algorithm. Compared with pixel-based global optimization algorithms such as classical BP and Graph Cut(GC), this algorithm greatly reduces the error matching rate, especially in textureless and occluded regions.
  • ZHANG Feng, LIU Hong, WANG Ai-Lin
    Computer Engineering. 2013, 39(7): 261-264,283. https://doi.org/10.3969/j.issn.1000-3428.2013.07.058
    To address the premature stagnation of the original Artificial Bee Colony(ABC) algorithm in evacuation motion simulation, an improved algorithm based on population division is proposed. A multi-population cooperation mechanism is used to increase solution diversity and prevent the search from sinking into local optima. A 3D simulation system built on ACIS/HOOPS is used for evacuation simulation and comparative analysis of the proposed algorithm. Experimental results show that the algorithm improves accuracy and convergence speed compared with the original algorithm, achieves an even distribution of the evacuating population, and is more efficient than the Particle Swarm Optimization(PSO) algorithm.
  • WANG Lian-Guo, DAI Yong-Jiang
    Computer Engineering. 2013, 39(7): 265-269,287. https://doi.org/10.3969/j.issn.1000-3428.2013.07.059
    This paper proposes a Multi-agent Shuffled Frog Leaping Algorithm(MSFLA) by introducing a multi-agent system into the Shuffled Frog Leaping Algorithm(SFLA). The algorithm fixes each agent on a grid; through competition and cooperation with its neighbors, combined with the evolution mechanism of the SFLA, each agent continually senses its local environment and gradually affects the whole agent grid, thereby enhancing its adaptability to the environment. Each agent also performs self-learning, using its knowledge to further enhance this adaptability. Tests on high-dimensional benchmark functions show that the algorithm effectively maintains population diversity, increases optimization precision, efficiently restrains premature convergence, and has higher optimization performance for high-dimensional function optimization.
  • ZHENG Xin-Meng, LIU Ning-Zhong
    Computer Engineering. 2013, 39(7): 270-273. https://doi.org/10.3969/j.issn.1000-3428.2013.07.060
    Because of its sparsity and feature-preserving properties, sparse representation is widely used in image processing. To solve the image denoising problem, this paper proposes a new Bayesian denoising model based on sparse representation of image features. The model uses K-means clustering and Principal Component Analysis(PCA) to obtain the dictionary coefficients for the sparse representation of image patches, and the coefficient solutions are then used to train the dictionary through regularized optimization. Alternating minimization between these two steps continues until the difference between successive dictionaries satisfies a convergence criterion, and the denoised image is restored under the MAP model with the learned dictionary. Experimental results show that, as the noise imposed on clean images increases, the method achieves higher Peak Signal to Noise Ratio(PSNR) values than the noisy source images and than initialization with the Discrete Cosine Transform(DCT) dictionary.
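    A sketch of how the K-means plus PCA dictionary step could look, using only numpy: patch vectors are clustered with a plain Lloyd iteration, and the leading principal directions of each cluster become its sub-dictionary atoms. The cluster count, atom count and clustering details are assumptions; the paper's regularized dictionary training is not reproduced here.

        import numpy as np

        def kmeans_pca_dictionary(patches, k=8, atoms_per_cluster=8, iters=20, seed=0):
            """Cluster patch vectors with a plain Lloyd iteration, then take the leading
            PCA directions of each cluster as its sub-dictionary (columns are atoms)."""
            patches = np.asarray(patches, dtype=float)
            rng = np.random.default_rng(seed)
            centers = patches[rng.choice(len(patches), k, replace=False)].copy()
            labels = np.zeros(len(patches), dtype=int)
            for _ in range(iters):
                d = ((patches[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
                labels = d.argmin(1)                            # nearest-center assignment
                for c in range(k):
                    if np.any(labels == c):
                        centers[c] = patches[labels == c].mean(0)
            subdicts = []
            for c in range(k):
                members = patches[labels == c]
                if len(members) == 0:
                    subdicts.append(None)
                    continue
                _, _, Vt = np.linalg.svd(members - centers[c], full_matrices=False)
                subdicts.append(Vt[:atoms_per_cluster].T)       # principal directions as atoms
            return centers, subdicts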
  • LIU Xin-Ni, MA Miao
    Computer Engineering. 2013, 39(7): 274-278. https://doi.org/10.3969/j.issn.1000-3428.2013.07.061
    Aiming at the high cost of searching for multiple thresholds exhaustively, a new image multi-threshold segmentation algorithm based on the Cuckoo Search(CS) algorithm is proposed in this paper. The algorithm employs the Otsu method as the fitness function and uses the favorable parallel searching capability of the CS algorithm to quickly and accurately find the optimal thresholds of the image to be segmented. Experimental results show that the CS algorithm outperforms the Bacterial Foraging(BF) algorithm and the Artificial Bee Colony(ABC) algorithm in terms of segmentation speed and segmentation thresholds.
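    The Otsu fitness itself is easy to state in code: for a candidate threshold vector, compute the between-class variance of the gray-level histogram and let the search maximize it. The sketch below is a generic implementation of that fitness (assuming a 256-bin histogram), which a cuckoo search, or any other metaheuristic, would evaluate on each candidate solution.

        import numpy as np

        def otsu_between_class_variance(hist, thresholds):
            """Between-class variance of a gray-level histogram split by the given
            thresholds; a multi-threshold search tries to maximize this value."""
            hist = np.asarray(hist, dtype=float)
            p = hist / hist.sum()                      # bin probabilities
            levels = np.arange(len(hist))
            cuts = [0] + sorted(int(t) for t in thresholds) + [len(hist)]
            mu_total = (p * levels).sum()
            var = 0.0
            for lo, hi in zip(cuts[:-1], cuts[1:]):
                w = p[lo:hi].sum()                     # class probability
                if w > 0:
                    mu = (p[lo:hi] * levels[lo:hi]).sum() / w
                    var += w * (mu - mu_total) ** 2
            return var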
  • SHU Chen-Yang, XIONG Yue-Shan, TAN Ke, BO Xin-Hua
    Computer Engineering. 2013, 39(7): 279-283. https://doi.org/10.3969/j.issn.1000-3428.2013.07.062
    Aiming at the problem that the general boundary-fill approach cannot work effectively on space surfaces, this paper proposes a seed-fill algorithm for triangular meshes. By changing the way a seed is determined, the algorithm becomes applicable to 3D meshes. The subdivision is guided by the contour, and the subdivision points are filtered by features of the convex hull. Experimental results show that the algorithm performs boundary filling on triangular meshes well and has practical value, satisfying practical needs in both efficiency and fill effect.
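    The fill stage can be illustrated independently of the subdivision and convex-hull filtering: starting from a seed triangle, a breadth-first traversal spreads across faces that share an edge but never crosses an edge belonging to the filling contour. The sketch below assumes the mesh is given as vertex-index triples and the contour as a set of undirected edges; it is not the paper's exact procedure.

        from collections import defaultdict, deque

        def fill_from_seed(faces, boundary_edges, seed_face):
            """Flood-fill triangle faces from a seed without crossing a boundary edge.
            faces: list of (v0, v1, v2) vertex-index triples; boundary_edges: set of
            frozenset({va, vb}) edges that form the closed filling contour."""
            edge_to_faces = defaultdict(list)
            for fi, (a, b, c) in enumerate(faces):
                for e in (frozenset((a, b)), frozenset((b, c)), frozenset((c, a))):
                    edge_to_faces[e].append(fi)
            filled, queue = {seed_face}, deque([seed_face])
            while queue:
                fi = queue.popleft()
                a, b, c = faces[fi]
                for e in (frozenset((a, b)), frozenset((b, c)), frozenset((c, a))):
                    if e in boundary_edges:
                        continue                        # never cross the contour
                    for nb in edge_to_faces[e]:
                        if nb not in filled:
                            filled.add(nb)
                            queue.append(nb)
            return filled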
  • LI Juan-Juan, LI Xiao-Gong
    Computer Engineering. 2013, 39(7): 284-287. https://doi.org/10.3969/j.issn.1000-3428.2013.07.063
    To address single-frame image super-resolution, which reconstructs a high-resolution image using the prior provided by training images, this paper proposes a single-frame image super-resolution reconstruction method based on clustering. It builds a structural-clustering-based high-resolution dictionary from a set of high-resolution images, optimizes the objective function with an iterative shrinkage solution to obtain the representation coefficients of the high-resolution image, and reconstructs the low-resolution image by exploiting the learned high-resolution dictionary. Experimental results show that, compared with the Total Variation(TV) method, the Softcuts method and the Sparse Representation(SR) method, this method produces better single-frame super-resolution reconstructions.
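    The iterative shrinkage step can be illustrated with a plain ISTA loop that solves min_a 0.5*||y - D a||^2 + lam*||a||_1 for the representation coefficients a over a dictionary D. This is a generic sketch of the solver, not the paper's exact objective or dictionary.

        import numpy as np

        def ista(D, y, lam=0.1, iters=200):
            """Iterative shrinkage-thresholding for min_a 0.5*||y - D a||^2 + lam*||a||_1."""
            L = np.linalg.norm(D, 2) ** 2               # Lipschitz constant of the gradient
            a = np.zeros(D.shape[1])
            for _ in range(iters):
                grad = D.T @ (D @ a - y)
                z = a - grad / L
                a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
            return a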
  • WANG Xiao-Pan, MA Li, LIU Fu-Jiang
    Computer Engineering. 2013, 39(7): 288-292. https://doi.org/10.3969/j.issn.1000-3428.2013.07.064
    To improve the classification accuracy of hyperspectral remote sensing images when training data are scarce, this paper proposes a Weighted K-nearest Neighbor(WKNN) algorithm based on Linear Neighborhood Propagation(LNP). To enlarge the training set and improve classification accuracy, it obtains the class probabilities of unlabeled samples with the LNP algorithm, which also reduces the misclassification risk of LNP. Experimental results show that the algorithm performs better than supervised classification algorithms such as the K-nearest Neighbor(KNN) algorithm and the distance-weighted KNN algorithm, as well as the LNP semi-supervised classification algorithm.
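    A sketch of the final weighted vote, assuming the per-sample class probabilities (for example, those produced by label propagation) are already available: each of the k nearest training samples contributes its probability vector, weighted by inverse distance. The exact combination rule here is an assumption, not necessarily the paper's.

        import numpy as np

        def weighted_knn_predict(X_train, P_train, x, k=5, eps=1e-9):
            """X_train: (n, d) samples; P_train: (n, c) per-sample class probabilities
            (for example, from label propagation); x: (d,) query. Returns a class index."""
            dists = np.linalg.norm(X_train - x, axis=1)
            idx = np.argsort(dists)[:k]
            w = 1.0 / (dists[idx] + eps)                # inverse-distance weights
            votes = (w[:, None] * P_train[idx]).sum(axis=0)
            return int(np.argmax(votes))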
  • SONG Min, SHEN Yan-Chun
    Computer Engineering. 2013, 39(7): 293-297,301. https://doi.org/10.3969/j.issn.1000-3428.2013.07.065
    To address the accumulation of positioning errors caused by drift in Personal Dead Reckoning(PDR) for indoor localization, this paper puts forward a PDR algorithm that integrates Micro Electro Mechanical System(MEMS) sensors. It includes the following three points: it obtains displacement information based on the typical pedometer principle and stride length estimation; it adds a dynamic time window and an acceleration threshold to the step counting algorithm; and it obtains the angle between the walking direction and magnetic north with the magnetometer and gyroscope to correct the accumulated positioning error, thereby realizing indoor localization and navigation. The results prove that the distance error in the indoor localization experiment can be controlled within 5%, the step counting is highly precise, the azimuth algorithm is easy to implement, and it can rectify the error caused by long-time drift to a certain extent.
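    A minimal dead-reckoning sketch: a step is detected when the acceleration magnitude exceeds a threshold and enough time has passed since the previous step (a fixed interval standing in for the paper's dynamic time window), and the position then advances by the estimated stride along the current heading. The threshold, interval and constant stride below are illustrative assumptions.

        import math

        def pdr_track(samples, acc_threshold=11.0, min_step_interval=0.3, stride=0.7):
            """samples: iterable of (t, acc_magnitude, heading_rad); returns the (x, y)
            track. A step is counted when the acceleration magnitude exceeds the
            threshold and enough time has passed since the previous step."""
            x = y = 0.0
            last_step_t = float("-inf")
            track = [(x, y)]
            for t, acc, heading in samples:
                if acc > acc_threshold and (t - last_step_t) >= min_step_interval:
                    last_step_t = t
                    x += stride * math.sin(heading)     # heading measured from magnetic north
                    y += stride * math.cos(heading)
                    track.append((x, y))
            return track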
  • XU Wei-Gong, CHEN Yan
    Computer Engineering. 2013, 39(7): 298-301. https://doi.org/10.3969/j.issn.1000-3428.2013.07.066
    At present, in many multi-Agent applications, Agents are mainly deployed on networked computers. With ever higher requirements for the intelligence and initiative of mobile applications, how to deploy Agents onto resource-constrained handheld devices is becoming one of the most difficult issues in current research. To solve this problem, middleware technologies and the idea of a split container are used to integrate Jade with Android. For verification, a Jade Agent is successfully developed for the Android platform. The results show that the implementation of a lightweight embedded Agent makes handheld applications more convenient, intelligent, proactive and interactive, and can meet the personalized needs of users.
  • DIAO Zhi-Long, ZUO De-Cheng, ZHANG Zhan, JIAN Jun
    Computer Engineering. 2013, 39(7): 302-305,310. https://doi.org/10.3969/j.issn.1000-3428.2013.07.067
    This paper studies the fault tolerance of the I/O subsystem against file system errors, and designs a general file-system-oriented software fault injection tool. The tool realizes file system faults by intercepting and modifying the file operation function jump table at the kernel level, and simulates a variety of temporary or permanent faults that may appear in the I/O subsystem. With performance measurement tools, it compares the performance of the I/O subsystem before and after file system fault injection. The results verify that the fault injection tool can effectively simulate abnormal file operation faults.
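    The kernel-level jump-table interception cannot be reproduced in a short sketch, but the intercept-and-inject pattern can be illustrated with a user-space analogue in Python: the built-in open() is wrapped so that it occasionally raises an I/O error instead of completing, mimicking a transient fault injected into the file-operation path. This is only an analogue of the idea, not the paper's tool.

        import builtins
        import errno
        import random

        _real_open = builtins.open

        def faulty_open(*args, fail_prob=0.3, **kwargs):
            """Randomly raise an I/O error instead of opening the file, mimicking a
            transient fault injected into the open() path."""
            if random.random() < fail_prob:
                raise OSError(errno.EIO, "injected I/O fault")
            return _real_open(*args, **kwargs)

        def enable_fault_injection():
            builtins.open = faulty_open

        def disable_fault_injection():
            builtins.open = _real_open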
  • DIAO Shen, JIAO Chun-Cha, ZHANG Chao-Meng, MA Chao
    Computer Engineering. 2013, 39(7): 306-310. https://doi.org/10.3969/j.issn.1000-3428.2013.07.068
    The multi-channel, high-sampling-frequency nature of forward looking sonar imposes a heavy computation burden on azimuth and distance information processing, which places high requirements on real-time computation. By exploiting the processing units and distributed operation of a Field Programmable Gate Array(FPGA), this speed bottleneck can be resolved to a large extent. This paper mainly studies the FPGA-based implementation of the quadrature demodulation and beamforming algorithms applied to the raw imaging information. The signal model is established and the core algorithm is derived; the difficulties in the implementation are presented, and the utilization of FPGA resources is given. The processing scheme is validated with real data from tank and lake trials, yielding accurate results.
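    A floating-point software reference of the two stages, of the kind an FPGA fixed-point implementation is usually checked against: quadrature demodulation mixes each channel to complex baseband (the low-pass filter that follows is omitted for brevity), and a narrowband phase-shift form of delay-and-sum beamforming steers the array. The uniform linear array geometry, sound speed and sign conventions below are assumptions.

        import numpy as np

        def quadrature_demodulate(x, fc, fs):
            """Mix a real passband channel down to complex baseband."""
            n = np.arange(len(x))
            return x * np.exp(-2j * np.pi * fc * n / fs)

        def delay_and_sum(channels, angles, spacing, fc, c=1500.0):
            """Narrowband phase-shift beamforming for a uniform linear array.
            channels: (m, n) complex baseband samples; spacing in metres;
            angles: steering angles in radians. Returns one beam per angle."""
            m, _ = channels.shape
            elem = np.arange(m) * spacing
            beams = []
            for theta in angles:
                delays = elem * np.sin(theta) / c               # per-element delay in seconds
                phase = np.exp(2j * np.pi * fc * delays)        # steering phases (sign is a convention)
                beams.append((channels * phase[:, None]).sum(axis=0) / m)
            return np.array(beams)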
  • HE Jun, TIAN Ceng, GUO Yong, CHEN Cheng
    Computer Engineering. 2013, 39(7): 311-313,317. https://doi.org/10.3969/j.issn.1000-3428.2013.07.069
    Considering the shortcoming that the Fused Multiply-Add(FMA) unit increases the latency of separate floating-point add/subtract and multiply operations, this paper studies the effect on floating-point performance of an FMA latency optimization that reduces the latency of separate floating-point add/subtract and multiply operations from 6 cycles to 4 cycles. Based on a homemade processor with an FMA unit, the RTL design is modified, and the effect of the optimization on floating-point performance is estimated by running the SPEC CPU2000 floating-point benchmarks on a hardware emulation acceleration platform. The results show that the floating-point performance of all the benchmarks improves, by at most 5.25% and by 1.61% on average, proving that the optimization is beneficial to floating-point performance.
  • CENG Chui-Xin, WANG Jia-Dun, SHEN Li-Ping, SHEN Rui-Min
    Computer Engineering. 2013, 39(7): 314-317. https://doi.org/10.3969/j.issn.1000-3428.2013.07.070
    Given the importance of mobile learning, and making use of the high performance, portability and easy network access of iOS devices, an iOS-based mobile learning platform is designed and implemented with key technologies such as the HTTP Streaming protocol, the XMPP protocol and SJSP encoding. The platform is composed of a teacher's terminal, a making terminal, a data center and a learning center. Actual application results show that users can watch courses live or on demand on iOS devices, and while watching they can post questions to interact with teachers, which helps them learn better.
  • HU Min, WANG Jian, LAI Jin-Mei
    Computer Engineering. 2013, 39(7): 318-320, back cover. https://doi.org/10.3969/j.issn.1000-3428.2013.07.071
    Aiming at modern mainstream Field Programmable Gate Array(FPGA) devices with diverse logic blocks and interconnect lines, this paper proposes a universal FPGA architecture description method. Considering the fact that tiles are copied and pieced together to form the overall FPGA hardware layout, the paper proposes an FPGA architecture model based on hierarchical tiles. According to the model, it also defines a set of complete and detailed syntactic rules to describe the FPGA architecture. Experimental results show that the description method can delineate FPGA hardware information and work correctly with the FPGA software system, is general across architectures, and is small in size.
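    An entirely hypothetical illustration of the tile-based idea (the names, fields and layout are invented, not the syntax defined in the paper): each tile type is described once, and the device grid only references tile types, so the full layout is obtained by stamping tiles out over the grid.

        # Hypothetical tile-based description: each tile type is declared once and the
        # device grid only references tile types by name.
        ARCH = {
            "tiles": {
                "CLB":  {"blocks": ["lut4", "lut4", "dff", "dff"], "wires": {"single": 8, "double": 4}},
                "BRAM": {"blocks": ["ram_4k"],                     "wires": {"single": 8, "double": 4}},
                "IOB":  {"blocks": ["io_pad", "io_pad"],           "wires": {"single": 4}},
            },
            "grid": {"width": 4, "height": 4, "default": "CLB",
                     "columns": {0: "IOB", 2: "BRAM", 3: "IOB"}},
        }

        def tile_at(arch, x, y):
            """Resolve which tile type is stamped at grid position (x, y)."""
            name = arch["grid"]["columns"].get(x, arch["grid"]["default"])
            return name, arch["tiles"][name]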