
15 November 2013, Volume 39 Issue 11
    

  • ZHOU Ming-hui, HU Shi-qiang, CHEN Si-cong
    Computer Engineering. 2013, 39(11): 1-4. https://doi.org/10.3969/j.issn.1000-3428.2013.11.001

    To address the problem that a target of interest is easily lost or occluded in large, complex scenes when a traditional camera is used, a cylindrical-unwrapping and real-time target tracking method based on a panoramic camera is proposed. An improved unwrapping algorithm transforms the omni-directional image into a panoramic image, effectively correcting the distortion of the panoramic image, and CamShift combined with a Kalman filter is then used to track the moving target. Experimental results demonstrate that the proposed algorithm achieves real-time, robust target tracking in large-scale complex scenes, including cases where the moving target is occluded, temporarily disappears, or is interfered with by objects of the same color.
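    The role of the Kalman filter in such a tracker can be sketched as follows: the detector (CamShift in the paper) supplies measured positions, and a constant-velocity Kalman filter predicts through frames where the target is occluded. This is a simplified 1-D illustration, not the paper's implementation; the noise parameters q and r are assumed values.

```python
# Constant-velocity Kalman filter: predict position through occlusion.
class Kalman1D:
    def __init__(self, q=1e-3, r=1e-1):
        self.x = [0.0, 0.0]                 # state: position, velocity
        self.p = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
        self.q, self.r = q, r               # process / measurement noise

    def predict(self, dt=1.0):
        x, v = self.x
        self.x = [x + v * dt, v]
        p = self.p
        # P = F P F^T + Q, with F = [[1, dt], [0, 1]]
        p00 = p[0][0] + dt * (p[1][0] + p[0][1]) + dt * dt * p[1][1] + self.q
        p01 = p[0][1] + dt * p[1][1]
        p10 = p[1][0] + dt * p[1][1]
        p11 = p[1][1] + self.q
        self.p = [[p00, p01], [p10, p11]]
        return self.x[0]

    def update(self, z):
        s = self.p[0][0] + self.r           # innovation covariance
        k0, k1 = self.p[0][0] / s, self.p[1][0] / s
        y = z - self.x[0]                   # innovation
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p = self.p
        self.p = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
                  [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]

kf = Kalman1D()
for t in range(10):          # target moves at 1 px/frame; detector sees it
    kf.predict()
    kf.update(float(t))
pred = kf.predict()          # frame 10: target occluded, rely on prediction
```

When the detector loses the target, the tracker keeps calling `predict()` and resumes `update()` once the target reappears.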

  • LI Zhu-liang, ZHAO Yu-ming
    Computer Engineering. 2013, 39(11): 5-8. https://doi.org/10.3969/j.issn.1000-3428.2013.11.002
    Popular calibration methods are relatively complex, which limits the adoption of calibrated cameras. Based on lens undistortion, aided by calibration-board information and vanishing-point constraints, this paper proposes a full camera calibration method that uses a single image. The lens distortion parameters are obtained by Levenberg-Marquardt(LM) iteration, after which the camera's intrinsic parameters are solved linearly and the extrinsic parameters are obtained directly. The proposed method is easy to use, since only one image containing a calibration board is needed for full calibration. Experiments confirm that it is robust enough to achieve calibration when the angle between the calibration board and the view plane is less than 45 degrees, while keeping the re-projection error below 0.3 pixels.
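    The LM step for the distortion parameters can be illustrated with a minimal example: fitting a single radial-distortion coefficient k1 under the common model r_d = r(1 + k1 r^2). The model and the synthetic data are assumptions for the sketch, not the paper's full distortion model.

```python
# Levenberg-Marquardt fit of one radial-distortion coefficient k1.
def lm_fit_k1(rs, rds, k1=0.0, lam=1e-3, iters=20):
    def residuals(k):
        return [rd - r * (1 + k * r * r) for r, rd in zip(rs, rds)]
    def cost(k):
        return sum(e * e for e in residuals(k))
    for _ in range(iters):
        j = [-r ** 3 for r in rs]           # d(residual)/dk1 = -r^3
        e = residuals(k1)
        jtj = sum(x * x for x in j)
        jte = sum(x * y for x, y in zip(j, e))
        step = -jte / (jtj + lam * jtj)     # damped Gauss-Newton step
        if cost(k1 + step) < cost(k1):
            k1 += step
            lam *= 0.5                      # good step: trust the model more
        else:
            lam *= 2.0                      # bad step: increase damping
    return k1

true_k1 = -0.12
rs = [0.1 * i for i in range(1, 11)]                    # undistorted radii
rds = [r * (1 + true_k1 * r * r) for r in rs]           # distorted radii
k1 = lm_fit_k1(rs, rds)
```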
  • LUO Qi-wu, HE Yi-gang, LI Zhong-qun, ZHENG Jian, YU Wen-xin
    Computer Engineering. 2013, 39(11): 9-13,18. https://doi.org/10.3969/j.issn.1000-3428.2013.11.003
    This paper chooses node voltages and amplitude-frequency and phase-frequency characteristics as the fault feature vector set, captured by a multichannel wide-band data acquisition interface equipped with a corresponding driver program. Taking advantage of network analysis and information fusion, it proposes fully digital, orthogonal approaches for detecting magnitude-phase characteristics in real time. The paper develops and implements a modular analog-circuit fault-diagnosis hardware architecture based on Field Programmable Gate Array(FPGA), addressing the problem that most analog-circuit diagnosis algorithms remain at the simulation stage for lack of a suitable platform. Case analysis results show that the system handles tolerance faults efficiently, achieving a recognition accuracy of up to 96% within 60 ms, even though some data overlap when tolerance is considered.
  • YANG Zhong-zhen, WANG Jin-lin, ZHENG Yan-wei, WANG Xian-guan
    Computer Engineering. 2013, 39(11): 14-18. https://doi.org/10.3969/j.issn.1000-3428.2013.11.004
    To address the lack of a channel bonding mechanism in Ethernet Passive Optical Network(EPON) and Ethernet over Coax(EoC) networks, this paper proposes Multiple Channel MPE(MC-MPE) based on Multi-protocol Encapsulation(MPE), and then designs a multi-channel transmission scheme. By introducing a multi-channel transmission layer between the IP layer and the MPE layer, MC-MPE remains compatible with existing hardware and supports priority-based transmission, fast packet reordering and packet-loss detection. Analysis shows that MC-MPE encapsulation efficiency is slightly higher than that of Data-Over-Cable Service Interface Specifications(DOCSIS) 3.0. A multi-channel transmission scheme based on MC-MPE can significantly improve user access bandwidth, exceed the level of DOCSIS 3.0, and enable networks to meet narrowband asymmetric service needs.
  • XU Wei, WANG Jian, YANG Xin
    Computer Engineering. 2013, 39(11): 19-23,30. https://doi.org/10.3969/j.issn.1000-3428.2013.11.005

    In myocardial perfusion imaging, the location and shape of the heart change with respiration and heartbeat, so it is necessary to compensate for the motion of the myocardium in Cardiac Magnetic Resonance(CMR). To address the weak-feature problem in medical images, this paper introduces a Markov Random Field(MRF) to assess cardiac motion. Using the neighborhood and intensity information of image pixel blocks across the cardiac-cycle image sequence, motion vectors are calculated and the most similar pixel blocks are aligned to nearly the same position to compensate for cardiac motion. Because of the computational complexity of the MRF, GPU-based methods are introduced to improve the performance of the whole algorithm. Experimental results demonstrate that the method effectively corrects the movement and deformation in myocardial perfusion images; with the GPU, the calculation performance increases by 400% and the calculation time is one third of that of the CPU-based method.
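    The data term of such block-based motion estimation can be sketched as plain block matching: for a block in one frame, search a window in the next frame for the displacement with the smallest sum of absolute differences (SAD). The MRF neighborhood term and the GPU parallelization of the paper are omitted; the frames here are toy arrays.

```python
# Block matching by exhaustive SAD search in a small window.
def sad(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def block(frame, r, c, size):
    return [frame[r + i][c + j] for i in range(size) for j in range(size)]

def best_match(f0, f1, r, c, size, search):
    ref = block(f0, r, c, size)
    h, w = len(f1), len(f1[0])
    best, best_cost = (0, 0), float("inf")
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr <= h - size and 0 <= cc <= w - size:
                cost = sad(ref, block(f1, rr, cc, size))
                if cost < best_cost:
                    best, best_cost = (dr, dc), cost
    return best

f0 = [[0] * 8 for _ in range(8)]
f0[2][2] = f0[2][3] = f0[3][2] = f0[3][3] = 9   # bright 2x2 patch
f1 = [[0] * 8 for _ in range(8)]
f1[3][4] = f1[3][5] = f1[4][4] = f1[4][5] = 9   # same patch moved by (1, 2)
mv = best_match(f0, f1, 2, 2, 2, 3)             # motion vector of the patch
```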

  • YUAN Shao-feng, WANG Shi-tong
    Computer Engineering. 2013, 39(11): 24-30. https://doi.org/10.3969/j.issn.1000-3428.2013.11.006

    To address the heavy-tailed characteristics of real face images, a multi-class face recognition method based on a mixed Kotz-type distribution is proposed. Mixed Kotz-type distributions and generalized inverse Gamma distributions are often used to represent heavy-tailed characteristics. Based on kernel methods and probability statistics, the method adjusts the parameters of the mixed Kotz-type distribution to estimate the facial image in the presence of heavy-tailed noise. Varying degrees of heavy-tailed noise are added to the ORL, Yale and homemade Randface face databases, forming new face databases with different levels of heavy-tailed noise. Verification on the three noisy face databases shows that the method can estimate the tail characteristics of face images containing heavy-tailed noise, and achieves a higher recognition rate.

  • OU Zhi-hui, ZHAO Ya-qun
    Computer Engineering. 2013, 39(11): 31-34,40. https://doi.org/10.3969/j.issn.1000-3428.2013.11.007
    Trivium is an important international stream cipher. Jia Yan-yan et al.(Journal of Electronics & Information Technology, 2011, No. 6) attack 2-round Trivium using simple and multiple linear cryptanalysis, but their papers use few linear approximations with small bias. This paper presents a method for linear cryptanalysis of 2-round Trivium by changing the clock number and the linear approximations of the first round, and proposes one linear approximation with bias 2^(-29) and 8 linear approximations with bias 2^(-30). Moreover, using the algorithm of Jia Yan-yan's paper, it attacks 2-round Trivium by simple and multiple linear cryptanalysis. Results show that, to identify a given secret key, the method requires only 1/16 of the previously reported data amount, namely 2^58 and 2^57 chosen Initial Vectors(IV), respectively.
  • GUO Jin-shi, TANG Hong-bo, GE Guo-dong
    Computer Engineering. 2013, 39(11): 35-40. https://doi.org/10.3969/j.issn.1000-3428.2013.11.008
    To address the problem that most existing community detection algorithms rely only on the structural characteristics of the network and ignore attribute information, a community detection algorithm based on fuzzy equivalence relations, combining topology and attributes in social networks, is proposed. The algorithm uses a new integrated dissimilarity distance index to combine topology and attributes, regards it as the membership relation used to build the fuzzy equivalence relation matrix, and chooses an appropriate clustering threshold for community detection. Experimental results show that the algorithm is more accurate than the traditional GN algorithm, and that nodes in the same community are both densely connected and homogeneous.
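    The fuzzy-equivalence clustering step can be sketched as: build a fuzzy similarity matrix (here from an assumed combined topology/attribute measure), compute its max-min transitive closure, then cut at a threshold to obtain communities. The toy 4-node matrix values are assumptions.

```python
# Fuzzy equivalence clustering: max-min transitive closure + threshold cut.
def maxmin_compose(a, b):
    n = len(a)
    return [[max(min(a[i][k], b[k][j]) for k in range(n)) for j in range(n)]
            for i in range(n)]

def transitive_closure(r):
    while True:
        r2 = maxmin_compose(r, r)
        if r2 == r:          # closure reached: composing changes nothing
            return r
        r = r2

def cut(r, t):
    # group nodes whose closure similarity is at least t
    n, labels, c = len(r), [-1] * len(r), 0
    for i in range(n):
        if labels[i] < 0:
            labels[i] = c
            for j in range(n):
                if r[i][j] >= t:
                    labels[j] = c
            c += 1
    return labels

sim = [[1.0, 0.8, 0.2, 0.1],
       [0.8, 1.0, 0.3, 0.2],
       [0.2, 0.3, 1.0, 0.9],
       [0.1, 0.2, 0.9, 1.0]]
communities = cut(transitive_closure(sim), 0.7)   # two communities: {0,1}, {2,3}
```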
  • ZHANG Pei-yun, GONG Xiu-wen
    Computer Engineering. 2013, 39(11): 41-45,51. https://doi.org/10.3969/j.issn.1000-3428.2013.11.009
    To reduce the high time complexity of algorithms for the influence maximization problem, this paper proposes an extended linear threshold propagation model and a probability transfer matrix, together with the propagation process and rules of the model. It designs an influence maximization algorithm based on the probability transfer matrix and uses a greedy method to find the top-k most influential nodes. The algorithm computes the influence probabilities at time T by products of the probability transfer matrix, so it need not compute the marginal benefit of inactive nodes at every step. This shortens the running time while maximizing the number of influenced nodes and widening the range of information propagation in large-scale social networks. Experimental results demonstrate the effectiveness and efficiency of the approach: the algorithm achieves a wide influence range over social network nodes with low time complexity in large networks.
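    The matrix-based spread computation can be sketched as: the activation-probability vector after T steps is the initial vector multiplied by the transfer matrix T times, and a greedy loop picks seeds by that spread. Toy 3-node chain; the transfer probabilities and the simplified gain measure are assumptions, not the paper's exact model.

```python
# Influence spread via repeated probability-transfer-matrix products,
# plus a simple greedy top-k seed selection on top of it.
def mat_vec(p, m):
    n = len(m)
    return [sum(p[i] * m[i][j] for i in range(n)) for j in range(n)]

def spread_after(p0, m, t):
    p = p0[:]
    for _ in range(t):
        p = mat_vec(p, m)
    return p

def greedy_top_k(m, t, k):
    # each round, add the node whose seed vector gives the largest total spread
    n, seeds = len(m), []
    for _ in range(k):
        best, best_gain = None, -1.0
        for v in range(n):
            if v in seeds:
                continue
            p0 = [1.0 if (i == v or i in seeds) else 0.0 for i in range(n)]
            gain = sum(spread_after(p0, m, t))
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.append(best)
    return seeds

m = [[0.0, 0.9, 0.0],       # node 0 influences node 1, which influences node 2
     [0.0, 0.0, 0.8],
     [0.0, 0.0, 0.0]]
seeds = greedy_top_k(m, 2, 1)   # the chain head is the best single seed
```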
  • YUAN Gong-biao, YANG Jin-min, BAI Shu-ren
    Computer Engineering. 2013, 39(11): 46-51. https://doi.org/10.3969/j.issn.1000-3428.2013.11.010
    Existing rollback-recovery techniques suffer time overheads that increase sharply with the number of nodes, due to synchronization constraints and a sequential execution pattern. To address this problem, this paper proposes a low-overhead rollback-recovery method based on concurrency exploitation. It piggybacks dependency information on messages to relax the message-log synchronization constraints. In addition, the workload of a process is decomposed to exploit its concurrency, and data buffering and multithreading are applied so that the workloads execute concurrently, yielding a low-overhead rollback-recovery scheme. Experimental results on three NAS NPB2.3 benchmarks show that checkpoint overheads decrease from 0.63 s, 3.19 s and 1.21 s to 0.18 s, 0.67 s and 0.19 s respectively, and that message-logging overhead ratios decrease from 13.4%, 3.5% and 18.3% to 0.7%, 0.1% and 1.0% respectively.
  • SUN Dong-pu, HAO Xiao-hong, HAO Zhong-xiao
    Computer Engineering. 2013, 39(11): 52-56. https://doi.org/10.3969/j.issn.1000-3428.2013.11.011
    The MAH_TPR indexing method is proposed to solve the degradation of update and query performance caused by frequent updates in spatio-temporal databases. The method optimizes preprocessing, the index structure and the update algorithm. Spatial clustering during index construction and update significantly reduces the probability of overlap among the spatial regions of nodes. A disk-based hash auxiliary structure allows leaf nodes to be accessed directly, reducing disk I/O, and a memory-based auxiliary storage structure holding the moving objects with frequent update requests avoids node merging and splitting in the main index. Experiments on update and query performance show that the MAH_TPR index outperforms the HTPR and LGU indexes in query performance, and outperforms the HTPR index in update performance.
  • ZENG Qing-hua, YUAN Jia-bin, ZHANG Yun-zhou
    Computer Engineering. 2013, 39(11): 57-60,64. https://doi.org/10.3969/j.issn.1000-3428.2013.11.012
    Mass mail filtering for large mail systems on traditional distributed systems suffers from programming difficulty, low efficiency, and heavy consumption of system and network resources. Taking advantage of the high performance of cloud computing in processing massive data, a MapReduce model of Bayesian mail filtering based on Hadoop is proposed. It improves the traditional Bayesian filtering algorithm and optimizes the mail training and filtering processes. Experimental results show that, compared with the traditional distributed computing model, the Hadoop-based MapReduce model of Bayesian anti-spam filtering performs better in recall, precision and accuracy, reduces the cost of mail learning and classification, and improves system efficiency.
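    The Bayesian core that the MapReduce jobs would compute can be sketched as: training counts word frequencies per class (the map/reduce phases), and classification compares log-posteriors with Laplace smoothing. Toy corpus; this is not the paper's Hadoop implementation.

```python
# Naive Bayes spam filtering: per-class word counts, then log-posterior compare.
from collections import Counter
from math import log

def train(docs):
    counts = {"spam": Counter(), "ham": Counter()}
    ndocs = Counter()
    for label, words in docs:
        counts[label].update(words)   # word frequencies per class
        ndocs[label] += 1             # class priors
    return counts, ndocs

def classify(words, counts, ndocs):
    vocab = set(counts["spam"]) | set(counts["ham"])
    total = sum(ndocs.values())
    best, best_lp = None, float("-inf")
    for label in counts:
        lp = log(ndocs[label] / total)
        denom = sum(counts[label].values()) + len(vocab)
        for w in words:
            lp += log((counts[label][w] + 1) / denom)   # Laplace smoothing
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [("spam", ["win", "cash", "now"]),
        ("spam", ["cash", "prize"]),
        ("ham", ["meeting", "at", "noon"]),
        ("ham", ["project", "meeting"])]
counts, ndocs = train(docs)
label = classify(["cash", "now"], counts, ndocs)
```

In the MapReduce setting, the `train` counting is what mappers and reducers parallelize over mail shards.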
  • FAN Xie-yu, REN Ying-chao, DENG Fu-liang, WANG Qing-gang
    Computer Engineering. 2013, 39(11): 61-64. https://doi.org/10.3969/j.issn.1000-3428.2013.11.013
    Current research on parallel spatial processing focuses on data declustering and the corresponding algorithms; less attention is paid to the parallel spatial database platform, especially to issues such as development-mode support and intensive concurrent access. This paper therefore proposes a parallel spatial query language based on a proxy mechanism, and implements a prototype platform on top of it. A standard Web Mapping Service(WMS) is developed on this platform and used to render large-scale vector datasets. Experimental results show that the platform is consistent with the development and application mode of a traditional relational database, provides seamless connection, and offers high availability and query performance under massive data and high concurrency.
  • ZHANG Yu, TIAN Wei, ZHONG Zi-fa
    Computer Engineering. 2013, 39(11): 65-67,73. https://doi.org/10.3969/j.issn.1000-3428.2013.11.014
    Considering the closed-loop transmission mode of the Long Term Evolution(LTE) downlink, this paper addresses the problem that computation and feedback grow rapidly with the number of sub-carriers, and gives a new method for selecting the precoding matrix based on mean mutual information. By dividing the K sub-carriers into D groups and using the mean channel matrix within each group, it selects a precoding matrix that maximizes the sum rate over all K sub-carriers. Simulation results show that the complexity of the proposed method is distinctly lower than that of existing methods, while the Block Error Rate(BLER) and system capacity remain almost unchanged.
  • MA Wei-guo, YANG Zhong
    Computer Engineering. 2013, 39(11): 68-73. https://doi.org/10.3969/j.issn.1000-3428.2013.11.015
    An additive non-fragile state-feedback H∞ controller is designed for a networked Lipschitz nonlinear system with logarithmic data quantization, network-induced delay shorter than one sampling period, packet dropout governed by a Markov chain, and controller parameter perturbation. The effects of data quantization and network-induced delay are converted into parameter uncertainties of the controlled system, and the networked nonlinear system is formulated as a Markovian jump system. A sufficient condition for the additive non-fragile state-feedback H∞ controller is derived in terms of linear matrix inequalities based on Lyapunov stability theory, and the H∞ controller is obtained by solving these inequalities. Simulation results show that, when parameter perturbations exist in the controller, the non-fragile controller stabilizes the controlled system and meets the specified H∞ performance index, unlike the traditional controller.
  • SHAO Yong-qing, HUANG Yan, WANG Xiang-yu
    Computer Engineering. 2013, 39(11): 74-77. https://doi.org/10.3969/j.issn.1000-3428.2013.11.016
    With the combination of satellite and Internet technology, the traffic carried over satellite Digital Video Broadcasting-Satellite(DVB-S) channels is increasing rapidly. The first-generation system cannot meet the needs of fast-growing IP traffic, so an entirely new Generic Stream(GS) is introduced in the DVB-S2 standard. Because existing satellite terminal equipment cannot process the GS well, this paper presents a DVB-S2 GS extraction and analysis method based on the form of the generic stream, the arrangement of the GS in the base-band frame, and the features of Generic Stream Encapsulation(GSE). It analyzes the encapsulation of the base-band frame and the GSE of the GS, and tests real data. Experimental results show that the method restores application-layer IP data quickly and accurately, realizing the processing of the GS.
  • ZHANG Qing-guo, WANG Jing-hua, ZHANG Wei
    Computer Engineering. 2013, 39(11): 78-82. https://doi.org/10.3969/j.issn.1000-3428.2013.11.017
    This paper proposes a node localization algorithm for Wireless Sensor Networks(WSN) based on Differential Evolution(DE). The algorithm constructs an objective function from the error between the estimated and measured distances between neighboring nodes, and uses DE to find its optimal solutions, which yield the estimated coordinates of the unknown nodes. Experimental results show that the localization error of the proposed algorithm is less than 5% when the percentage of anchor nodes is 10% and the transmission range R of the nodes is 1.8r, and that it achieves higher localization accuracy than the Semi-definite Programming(SDP) with gradient search localization algorithm.
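    The DE localization idea can be sketched as minimizing the squared difference between measured anchor distances and the distances to a candidate position. Toy 2-D instance with assumed anchors and noiseless measurements; the DE parameters F and CR are typical textbook values, not the paper's.

```python
# DE/rand/1 search for the node position minimizing range-error.
import random

def objective(pos, anchors, dists):
    return sum((((pos[0] - ax) ** 2 + (pos[1] - ay) ** 2) ** 0.5 - d) ** 2
               for (ax, ay), d in zip(anchors, dists))

def de_localize(anchors, dists, np_=20, f=0.5, cr=0.9, gens=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 10), rng.uniform(0, 10)] for _ in range(np_)]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            # mutate + binomial crossover
            trial = [a[d] + f * (b[d] - c[d]) if rng.random() < cr else pop[i][d]
                     for d in range(2)]
            if objective(trial, anchors, dists) < objective(pop[i], anchors, dists):
                pop[i] = trial           # greedy selection
    return min(pop, key=lambda p: objective(p, anchors, dists))

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true = (3.0, 4.0)
dists = [((true[0] - ax) ** 2 + (true[1] - ay) ** 2) ** 0.5 for ax, ay in anchors]
est = de_localize(anchors, dists)        # should land near (3, 4)
```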
  • XIONG Bing, ZHAO Jin-yuan, LIAO Nian-dong
    Computer Engineering. 2013, 39(11): 83-86,90. https://doi.org/10.3969/j.issn.1000-3428.2013.11.018
    To improve packet processing and analysis performance in high-speed networks, this paper proposes an efficient connection management method, Establishing Connection Buffering(ECB), which exploits the characteristics of the Transmission Control Protocol(TCP) three-way handshake. Based on an analysis of TCP connection establishment in IP networks, it presents a design that isolates establishing connections from the whole connection table and buffers them separately. It further classifies packets into several types and gives the connection-management implementation for each type. The performance of ECB connection management is evaluated with a real high-speed network traffic trace. Experimental results indicate that ECB achieves almost 50% better lookup performance than the traditional single-connection-table approach.
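    The ECB idea can be sketched as: connections still in the three-way handshake live in a small separate buffer, and only completed connections enter the main table, keeping main-table lookups cheap. The flow key and state names below are assumptions for illustration.

```python
# Separate buffer for half-open connections; main table for established ones.
class ECBTable:
    def __init__(self):
        self.establishing = {}   # SYN seen, handshake not finished
        self.established = {}    # main connection table

    def on_packet(self, key, syn=False, ack=False):
        if syn:
            self.establishing[key] = "SYN_SEEN"       # handshake in progress
        elif ack and key in self.establishing:
            del self.establishing[key]                # handshake done: promote
            self.established[key] = "ESTABLISHED"
        elif key in self.established:
            pass                 # data packet: only the main table is touched

t = ECBTable()
k = ("10.0.0.1", 1234, "10.0.0.2", 80)   # (src ip, src port, dst ip, dst port)
t.on_packet(k, syn=True)                 # SYN starts a half-open connection
t.on_packet(k, ack=True)                 # final ACK promotes it
```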
  • WANG Jing, ZHU Cui-tao
    Computer Engineering. 2013, 39(11): 87-90. https://doi.org/10.3969/j.issn.1000-3428.2013.11.019
    A novel reweighted distributed multi-object decision fusion algorithm is proposed to address the poor information-fusion efficiency of single sub-band detection in Cognitive Radio(CR), and the fact that fixed weights cannot be optimized during the fusion process. The algorithm detects multiple sub-bands simultaneously, uses an adaptive sparse weight matrix solved by the steepest descent method, and combines CR and channel information to choose the optimal collaborative users. Experimental results show that the algorithm raises the detection probability and stability under low SNR conditions.
  • HU Wei-ya, LU Jia-liang, WU Min-you
    Computer Engineering. 2013, 39(11): 91-95. https://doi.org/10.3969/j.issn.1000-3428.2013.11.020
    This paper introduces a wireless sensor tracking system for personal indoor positioning based on Received Signal Strength Indicator(RSSI) and Dead Reckoning(DR). The system is built from portable on-body sensor nodes and assisting sensor nodes deployed in the target indoor area, and takes a hybrid approach combining pedestrian dead reckoning with radio-based localization. Real-time inertial measurements are combined with RSSI-based information and processed with an extended Kalman filter, which weights them in the location estimate according to their reliability. An adaptive step-length algorithm is incorporated to reduce measurement deviation. Experimental results show that the system improves positioning accuracy by 66.3% compared with a pure inertial dead-reckoning system.
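    The two information sources being fused can be sketched as follows: an RSSI range from the common log-distance path-loss model, and a dead-reckoning position from step length and heading. The filter's weighting is summarized here as an assumed variance-based weighted average, not the paper's full extended Kalman filter; p0 and n are assumed model parameters.

```python
# RSSI ranging, dead-reckoning step, and a variance-weighted fusion of both.
from math import cos, sin, radians

def rssi_distance(rssi, p0=-40.0, n=2.0):
    # log-distance model: rssi = p0 - 10 n log10(d)
    return 10 ** ((p0 - rssi) / (10 * n))

def dr_step(pos, step_len, heading_deg):
    # advance the dead-reckoned position by one detected step
    return (pos[0] + step_len * cos(radians(heading_deg)),
            pos[1] + step_len * sin(radians(heading_deg)))

def fuse(x_dr, x_rssi, var_dr, var_rssi):
    w = var_rssi / (var_dr + var_rssi)   # more weight to the lower-variance source
    return w * x_dr + (1 - w) * x_rssi

d = rssi_distance(-60.0)                 # p0 - rssi = 20 dB  ->  d = 10 m
pos = dr_step((0.0, 0.0), 0.7, 90.0)     # one 0.7 m step heading "north"
x = fuse(5.0, 6.0, var_dr=1.0, var_rssi=4.0)
```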
  • ZHAO Nan-nan, WANG Cheng, YANG Xue-hui
    Computer Engineering. 2013, 39(11): 96-99. https://doi.org/10.3969/j.issn.1000-3428.2013.11.021
    Considering the degraded network throughput of the On-demand Multicast Routing Protocol(ODMRP) under increased load in Ad hoc networks, an improved ODMRP is presented. It introduces a load-balancing algorithm that accepts or discards a JOIN-TABLE according to the recent load of a node, so that a lightly loaded node becomes a forwarding node, easing network congestion and making full use of network resources. Simulation results show that the improved ODMRP increases network throughput and decreases the data loss rate under heavy load.
  • JIN Hai-bo, ZHONG Chong-quan
    Computer Engineering. 2013, 39(11): 100-104,108. https://doi.org/10.3969/j.issn.1000-3428.2013.11.022
    To improve the performance of real-time Ethernet and the transmission efficiency of data frames, an optimal transmission policy for real-time Ethernet is proposed based on stochastic optimization theory. The transmission states of real-time Ethernet are analyzed, and the transition probability matrix is derived by calculating the transition probabilities between any two states. The probability of each state is then determined by solving the steady-state equations, and the probabilities that frames are retransmitted successfully after each collision are calculated. The policy optimizes the sending rate of the nodes by maximizing an objective function of the successful transmission probability. Experimental results show that the policy improves the successful transmission rate and throughput by up to 50.4% and 23.4% respectively compared with the existing method, with some improvement in average point-to-point delay as well.
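    The steady-state step can be sketched generically: given the transition probability matrix P of the Ethernet states, the stationary distribution satisfies pi = pi P, which power iteration finds. The toy 2-state matrix values are assumptions.

```python
# Stationary distribution of a Markov chain by power iteration.
def steady_state(p, iters=200):
    n = len(p)
    pi = [1.0 / n] * n                   # start from the uniform distribution
    for _ in range(iters):
        pi = [sum(pi[i] * p[i][j] for i in range(n)) for j in range(n)]
    return pi

p = [[0.9, 0.1],                         # toy two-state transition matrix
     [0.3, 0.7]]
pi = steady_state(p)                      # converges to [0.75, 0.25]
```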
  • HE Jing, GUO Jin-li, XU Xue-juan
    Computer Engineering. 2013, 39(11): 105-108. https://doi.org/10.3969/j.issn.1000-3428.2013.11.023
    The propagation of public opinion on micro-blogs can be abstracted as a growing complex network. Based on an analysis of micro-blog network characteristics and user behavior, and considering assortative mixing when new users join the network, this paper establishes an evolution model of the micro-blog relationship network and simulates it. The analysis confirms that the micro-blog network follows a mixed exponential and power-law distribution. Empirical analysis of micro-blog user data confirms that the node degree distribution follows an exponentially truncated power law with scale-free and small-world characteristics, which is consistent with the theoretical analysis.
  • LIU A-na, DONG Shu-fu, HU Xi-ming
    Computer Engineering. 2013, 39(11): 109-113,118. https://doi.org/10.3969/j.issn.1000-3428.2013.11.024
    A fixed Physical Carrier Sensing(PCS) threshold is set before Wireless Sensor Network(WSN) nodes are deployed in a real environment, but there is no guarantee that a threshold fixed in advance works well under different levels of environmental noise. This paper therefore analyzes the effect of the PCS threshold on WSN performance, formulates energy-efficient PCS threshold configuration as an optimization that minimizes both collision probability and throughput loss, and proposes a noise-based strategy for energy-efficient PCS threshold self-configuration. Simulation results show that the throughput and energy efficiency of the Energy-efficient PCS(EPCS) threshold decrease by only 17.9% and 34.1% respectively when the Gaussian noise variance changes from 0.01 to 0.2, which is better than the performance of the fixed PCS thresholds.
  • YANG Jian-bo, JI Xin-sheng
    Computer Engineering. 2013, 39(11): 114-118. https://doi.org/10.3969/j.issn.1000-3428.2013.11.025
    To reduce the handover latency of media-independent handover, this paper proposes a handover mechanism based on an enhanced information service. By extending the functions of the information service entity, it defines enhanced information content and an update mechanism, and uses the proposed channel-parameter estimation algorithm to select the target access point quickly and accurately. Simulation results show that, with a reasonably chosen distance threshold, a mobile node can achieve a handover hit rate above 90% without scanning candidate target networks.
  • ZHANG Zhi-wei, ZHANG Chuan-fu, YUE Yun-tian
    Computer Engineering. 2013, 39(11): 119-122. https://doi.org/10.3969/j.issn.1000-3428.2013.11.026
    With the rising strategic importance of cyberspace, network countermeasure technology has become a research focus. To address the difficulty of building countermeasure models in cyberspace, this paper takes the worm as its research object and studies attack-defense countermeasure technology by building a worm propagation model and a worm defense model. First, a worm propagation model based on a selective-random scan strategy is built through analysis of the scanning strategy and the detection method. Then, on the basis of the propagation model, a defense method for Internet worms and improvement measures are proposed using worm signatures. Finally, a comprehensive countermeasure model is built. Simulation results show that the comprehensive defense method suppresses worm propagation more effectively than worm signature technology alone.
  • LIU Tao, XIONG Yan, HUANG Wen-chao, LU Qi-wei, GONG Xu-dong
    Computer Engineering. 2013, 39(11): 123-126. https://doi.org/10.3969/j.issn.1000-3428.2013.11.027
    Focusing on security in Wireless Sensor Networks(WSN) and combining the Elliptic Curve Cryptosystem(ECC), this paper presents a key management scheme based on a reputation model built on the Beta distribution. Distributed key establishment and update among the nodes avoid the single point of failure of group key management, while supporting node mobility and dynamic key management. Analysis shows that, compared with the E-G and IBC schemes, this scheme resists not only attacks from outside nodes but also attacks and malicious behavior from internal nodes, and substantially improves WSN security, invulnerability and node connection probability with less storage and communication overhead.
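    The Beta reputation model the scheme builds on can be sketched directly: a node's reputation is the expected value of a Beta(a+1, b+1) distribution, where a and b count observed good and bad interactions. The trust threshold below is an assumed value for illustration.

```python
# Beta reputation: E[Beta(good+1, bad+1)] = (good+1) / (good+bad+2).
def beta_reputation(good, bad):
    return (good + 1) / (good + bad + 2)

def is_trusted(good, bad, threshold=0.6):
    # a node qualifies for key operations only above the trust threshold
    return beta_reputation(good, bad) >= threshold

r = beta_reputation(8, 2)   # 8 good, 2 bad observations -> 9/12 = 0.75
```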
  • WANG Jin-ling, GAO Peng-ge
    Computer Engineering. 2013, 39(11): 127-130,135. https://doi.org/10.3969/j.issn.1000-3428.2013.11.028
    To obtain key stream sequences with better pseudo-random characteristics, this paper designs a new self-shrinking controlled generator that fuses the shrinking generator and the clock-controlled generator. It uses two n-stage m-sequences over GF(3) to construct the self-shrinking controlled sequence, which is a balanced sequence with period 3n+1. The numbers of length-1 runs of the symbols 1, 2 and 0 are calculated by case analysis, and the calculation is extended to length-2 runs of 1 and length-k runs of 1 and 2. Analysis shows that the sequence produced by the self-shrinking controlled generator has high balance, a long period and many short runs, and can meet the application requirements of stream cipher sequences.
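    The self-shrinking principle the generator builds on can be sketched for the classical binary case (the paper works over GF(3)): an LFSR stream is read in pairs, and the second element of a pair is output only when the first is 1. The LFSR taps and seed here are assumed example values.

```python
# Binary self-shrinking generator: LFSR stream decimated by its own bits.
def lfsr(state, taps, nbits):
    out, s = [], list(state)
    for _ in range(nbits):
        out.append(s[-1])                 # output the last register bit
        fb = 0
        for t in taps:
            fb ^= s[t]                    # XOR feedback from the tap positions
        s = [fb] + s[:-1]                 # shift in the feedback bit
    return out

def self_shrink(bits):
    # pair (a_2i, a_2i+1): emit a_2i+1 only when a_2i == 1
    return [bits[i + 1] for i in range(0, len(bits) - 1, 2) if bits[i] == 1]

stream = lfsr([1, 0, 0, 1], taps=[0, 3], nbits=30)
key = self_shrink(stream)                 # irregularly decimated key stream
```

The irregular decimation is what destroys the linear structure of the underlying LFSR output.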
  • LIU Ming-zhen
    Computer Engineering. 2013, 39(11): 131-135. https://doi.org/10.3969/j.issn.1000-3428.2013.11.029
    To improve the effect of network intrusion detection, this paper puts forward a detection model based on Chaotic Particle Swarm Optimization(CPSO) and the Least Squares Support Vector Machine(LSSVM). The network features and the LSSVM parameters are encoded into binary particles, and the objective function of the particle swarm is built from the detection accuracy and the dimension of the feature subset. The swarm searches for the optimal feature subset and LSSVM parameters, while a chaotic mechanism maintains swarm diversity and prevents premature convergence, yielding the optimal detection model. The model is tested on the KDD99 data; simulation results show that it selects the optimal feature subset and LSSVM parameters, improves detection speed and accuracy, and thereby reduces the false negative and false positive rates of network intrusion detection.
  • FANG Xian-jin, CAI Miao-qi
    Computer Engineering. 2013, 39(11): 136-138,142. https://doi.org/10.3969/j.issn.1000-3428.2013.11.030
    As one solution to the intrusion detection problem, Artificial Immune Systems(AIS) show clear advantages and are developing rapidly. This paper aims to acquaint Intrusion Detection System(IDS) practitioners with recent advances in AIS-based intrusion detection. Applications of the first- and second-generation AIS paradigms to common intrusion detection problems are reviewed, and the characteristics of each algorithm are demonstrated. The Dendritic Cell Algorithm(DCA) is shown to be a promising candidate for intrusion detection problems. Consequently, future work on the DCA is proposed, including a formal description of the algorithm, an online analysis component based on segmentation, and automated preprocessing of DCA input data.
  • XU Xin-hui, PAN Chao
    Computer Engineering. 2013, 39(11): 139-142. https://doi.org/10.3969/j.issn.1000-3428.2013.11.031
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Privacy preservation in set-valued data publishing is an important problem. Aiming at this problem, this paper presents an iterative strategy that anonymizes set-valued data through partial deletion. The strategy ensures that no strong inference of sensitive information is possible regardless of the amount of background knowledge the attacker possesses, while making no particular assumption about the downstream utility of the data. It attempts to retain as many mineable useful association rules as possible in the anonymized data while minimizing item deletions. Experimental results show that partial deletion significantly outperforms generalization and global deletion, two popular existing anonymization techniques, reducing the number of deletions by 30% on average and retaining 25% more rules.
  • MA Jun, LENG Hua
    Computer Engineering. 2013, 39(11): 143-146,157. https://doi.org/10.3969/j.issn.1000-3428.2013.11.032
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    There exist cyber security risks such as impersonation attack and data tampering in the communication process of a Distribution Automation System(DAS) based on General Packet Radio Service(GPRS). In order to establish session keys between the DAS encryption server and any wireless terminal, this paper presents a mutual authenticated key agreement protocol based on bilinear pairings for the GPRS communication network in DAS. It analyzes the features of the GPRS-based communication network architecture in DAS together with the corresponding cyber security risks and security requirements, and shows the implementation procedure of the protocol. The protocol includes the HMAC algorithm and takes resource-constrained wireless terminals into account. Security analysis proves that the scheme can resist outsider attack, replay attack and impersonation attack without the key escrow problem. Compared with related works, the proposed protocol is more secure and practical, and can satisfy the application requirements.
  • BAI Jian, LIU Nian, LI Zi-chen, LIU Hui
    Computer Engineering. 2013, 39(11): 147-149,162. https://doi.org/10.3969/j.issn.1000-3428.2013.11.033
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    A lattice is a regular arrangement of points in multi-dimensional space, and public-key cryptosystems based on lattices have attracted much research attention recently. This paper introduces the basic knowledge of reduced lattice bases and analyzes the Gauss algorithm and the LLL algorithm. On this basis, it presents the Gauss-LLL algorithm, proves the algorithm's validity and gives its pseudo code. The Gauss-LLL algorithm can reduce an arbitrary lattice basis and eventually obtain lattice vectors of shorter length. Analysis results show that the Gauss-LLL algorithm not only obtains a better reduced basis, but is also faster than the LLL algorithm.
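    As background for the Gauss step that Gauss-LLL builds on, the following is a minimal sketch of Gauss (Lagrange) reduction for a rank-2 integer lattice. This is standard textbook material, not the paper's Gauss-LLL pseudo code.

```python
def gauss_reduce(u, v):
    """Gauss (Lagrange) reduction of a rank-2 integer lattice basis (u, v).

    Repeatedly subtracts the nearest-integer multiple of the shorter vector
    from the longer one; on return, u is a shortest nonzero lattice vector.
    """
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]

    while True:
        if dot(u, u) > dot(v, v):
            u, v = v, u                          # keep u the shorter vector
        m = round(dot(u, v) / dot(u, u))         # nearest-integer coefficient
        if m == 0:
            return u, v                          # basis is reduced
        v = (v[0] - m * u[0], v[1] - m * u[1])
```

For example, the skewed basis (66, 13), (5, 1) reduces to the unit basis of Z².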
  • XU Fu
    Computer Engineering. 2013, 39(11): 150-153,168. https://doi.org/10.3969/j.issn.1000-3428.2013.11.034
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    In trusted models based on the noninterference theorem, processes are not allowed to change while the system is running, which restricts the application of the trusted computing platform. To solve this problem, the intransitive noninterference theorem is extended to support security domain modification. On this basis, a new trusted model based on the intransitive noninterference theorem and supporting process code modification is proposed. Under the new theoretical framework, the conditions for trusted process execution are given and the corresponding theorem is proved. Analysis results show that, compared with existing trusted models, this model can both guarantee trusted process execution and support process code modification, which enhances the practicability of the trusted computing platform.
  • LIU Hai-long, ZHANG Feng-bin, XI Liang
    Computer Engineering. 2013, 39(11): 154-157. https://doi.org/10.3969/j.issn.1000-3428.2013.11.035
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    In order to avoid numerous holes among mature immune detectors in intrusion detection, this paper analyzes the relationship between the number of detectors and detection performance, and proposes an immune detector distribution optimization algorithm based on co-evolution. It divides the detectors into different subsets, finds the best individual of each subset, optimizes the subsets through the interaction between them, and forms a complete detector set by taking their union. Experimental results demonstrate that this algorithm not only decreases the holes, but also achieves a more precise coverage of the nonself space with fewer detectors, and improves detection performance.
  • LIU San-ya, TIE Lu, LIU Zhi, SUN Jian-wen
    Computer Engineering. 2013, 39(11): 158-162. https://doi.org/10.3969/j.issn.1000-3428.2013.11.036
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    To solve the Chinese writing-style identification problem, this paper proposes an identification method based on a Multiple Probabilistic Reasoning Model(MPRM) from the point of view of ensemble learning. In this method, diverse subspaces are constructed by dividing the initial sample space into subsets of equal granularity that are allowed to overlap. A base classifier based on the Probabilistic Reasoning Model(PRM) is then trained in each subspace, and a probability summation method is used to fuse the outputs of the base classifiers to obtain the final recognition result. Experimental results show that this method is effective for online writing-style identification, with a recall rate of 81.6%, a precision rate of 85.9% and an F1-measure of 83.69%.
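    The probability-summation fusion step can be sketched as follows; the dictionary-based interface is an illustrative assumption, since the paper does not specify how base-classifier outputs are represented.

```python
def fuse_by_probability_sum(base_outputs):
    """Probability-summation fusion: sum the per-class probabilities emitted
    by each base classifier and pick the class with the largest total."""
    totals = {}
    for probs in base_outputs:
        for label, p in probs.items():
            totals[label] = totals.get(label, 0.0) + p
    return max(totals, key=totals.get)
```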
  • WEI Kun, LIU Mi-ge
    Computer Engineering. 2013, 39(11): 163-168. https://doi.org/10.3969/j.issn.1000-3428.2013.11.037
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    To apply the Quadratic Correlation Filter(QCF) in kernel space to infrared target detection, this paper proposes the KSSQSDF kernel direct mapping algorithm and the MPKPCA-SSQSDF kernel feature extraction fusion algorithm. KSSQSDF directly extends the QCF from a low-dimensional space to a high-dimensional space, transforming it into a nonlinear correlation filter in kernel space. MPKPCA-SSQSDF first extracts target features in kernel space, and the extracted feature vectors are then fed to a low-dimensional QCF for infrared target detection. Experiments analyze the difference in detection results and computational complexity when KSSQSDF and MPKPCA-SSQSDF are used respectively. The results show that the kernel direct mapping algorithm and the kernel feature extraction fusion algorithm have similar detection accuracy, which evidently exceeds that of the low-dimensional QCF. However, the MPKPCA-SSQSDF kernel feature extraction fusion algorithm does not restrict the type of QCF and has a shorter detection time, so it has a wider application range and can to some extent substitute for the KSSQSDF kernel direct mapping algorithm.
  • GUO Li, ZHENG Zhong-long, JIA Jiong, ZHANG Hai-xin, FU Fang-mei
    Computer Engineering. 2013, 39(11): 169-173. https://doi.org/10.3969/j.issn.1000-3428.2013.11.038
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Because Locality Preserving Projection(LPP) ignores the label information of the data and lacks robustness, this paper proposes a Linear Discriminant Projection(LDP) algorithm. By introducing a between-class weight matrix and a within-class weight matrix, LDP maximizes the separability of different submanifolds and minimizes the compactness of local submanifolds. Moreover, LDP is robust to outlier data thanks to a robust within-class processing scheme. Compared with PCA, LDA, LPP, LSDA and LPDP, the experimental results on the ORL, AR and Extended Yale B face databases show that the best average recognition rates of LDP are higher, reaching 95.3%, 93.64% and 96.28% respectively, which verifies the efficiency of the proposed algorithm.
  • YANG Ping-lv, ZHOU Ze-ming, SHI Han-qing, HUANG Feng
    Computer Engineering. 2013, 39(11): 174-177. https://doi.org/10.3969/j.issn.1000-3428.2013.11.039
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Aiming at the registration problem of the Geometric Active Contour(GAC) model with prior shape, this paper proposes a prior shape registration algorithm based on the variational method and the maximum mutual information criterion. After calculating the affine transform parameters with the variational method, the results are used as the initial values of the Powell optimization algorithm to maximize the mutual information between the reference and floating images. Experimental results demonstrate that the proposed algorithm improves computational efficiency while maintaining high registration precision.
  • LIAO Ping, SHEN Jia-jie, WU Ping
    Computer Engineering. 2013, 39(11): 178-182. https://doi.org/10.3969/j.issn.1000-3428.2013.11.040
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Genetic Algorithm(GA) is used as a machine learning tool for designing linguistic rule-based classification systems, in which an accurate description of each category is generated from the training data set. Since there is so far no uniform standard for GA encoding, this paper studies the relationship between the individual feature coding length, the classification accuracy and the efficiency of the classifier. It analyzes the effect of the coding length on classification by probabilistic approximation, and derives the mathematical expectation of the number of iteration steps to measure the classification efficiency of the GA. Experimental results show that under Michigan-style coding, the longer the encoding length, the higher the accuracy and the slower the convergence rate of the GA.
  • JI Huai-meng
    Computer Engineering. 2013, 39(11): 183-186. https://doi.org/10.3969/j.issn.1000-3428.2013.11.041
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    As the Apriori algorithm used for mining association rules can generate a large number of candidate itemsets and requires huge computation, an improved Apriori algorithm based on a frequent 2-itemset support matrix is proposed. By analyzing the generation mechanism of frequent (k+1)-itemsets, the improved algorithm combines an auxiliary matrix with the frequent 2-itemset matrix to realize rapid pruning: it trims infrequent itemsets quickly and reduces the amount of computation needed to verify frequent k-itemsets. Experimental results show that the frequent itemset mining efficiency of the improved algorithm increases significantly compared with the Apriori algorithm and the ABTM algorithm.
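    The idea of a 2-itemset support matrix used for pruning can be sketched as follows; the data structures and function names are illustrative assumptions, not the paper's exact algorithm.

```python
from itertools import combinations

def support_matrix_2(transactions, items):
    """Upper-triangular co-occurrence counts for all 2-itemsets."""
    idx = {item: i for i, item in enumerate(items)}
    m = [[0] * len(items) for _ in items]
    for t in transactions:
        for a, b in combinations(sorted(set(t) & set(items), key=idx.get), 2):
            m[idx[a]][idx[b]] += 1
    return m

def can_be_frequent(candidate, m, idx, min_count):
    """Prune rule: a (k+1)-candidate survives only if every one of its
    2-subsets already meets the support threshold in the matrix."""
    return all(m[min(idx[a], idx[b])][max(idx[a], idx[b])] >= min_count
               for a, b in combinations(candidate, 2))
```

This lets infrequent candidates be discarded by matrix lookups before any database scan.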
  • GUAN Jian, HAN Fei, YANG Shan-xiu
    Computer Engineering. 2013, 39(11): 187-190. https://doi.org/10.3969/j.issn.1000-3428.2013.11.042
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    In this paper, a new hybrid feature selection method is proposed to select informative genes with little redundancy and high classification accuracy. Most redundant genes are weeded out by the J-divergence entropy between different categories to form a gene pool, within which the Particle Swarm Optimization(PSO) algorithm is used as the search algorithm, combining J-divergence entropy and classification rate to select the optimal gene subset. The algorithm is tested on two commonly used microarray data sets, and the results show that the proposed method selects less redundant and more interpretable genes while increasing prediction accuracy.
  • ZHANG Mei, JIAO Wei, WANG Zeng-fu
    Computer Engineering. 2013, 39(11): 191-196. https://doi.org/10.3969/j.issn.1000-3428.2013.11.043
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    For surface ship detection with Over-the-horizon Radar(OTHR), a Track-before-detect(TBD) algorithm based on adaptive pre-whitening processing is proposed in this paper. The sea clutter is modeled as an Autoregressive(AR) process, and the estimated AR model parameters are used to construct a whitener that filters the sea clutter. Recursive Bayesian estimation is used to estimate the state of the target in the tracking stage. The output values of the tracking filter are used to approximately construct the generalized likelihood ratio for hypothesis testing in the detection stage. Simulation results with different Signal-to-noise Ratios(SNR) show that the clutter can be effectively suppressed and a target with a low SNR can be detected.
  • DU Xiao-qing, YU Feng-qin
    Computer Engineering. 2013, 39(11): 197-199,204. https://doi.org/10.3969/j.issn.1000-3428.2013.11.044
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    The fusion algorithm of Mel Frequency Cepstral Coefficient(MFCC) and Linear Prediction Cepstrum Coefficient(LPCC) can only reflect the static characteristics of speech, and LPCC cannot describe the local characteristics of the low-frequency part of speech well. Therefore, the fusion of the Hilbert-Huang Transform(HHT) cepstrum coefficient and the Relative Spectra-Perceptual Linear Prediction Cepstrum Coefficient(RASTA-PLPCC) is proposed, yielding a new speaker recognition algorithm that reflects both the vocal mechanism and the perceptual characteristics of the human ear. The HHT cepstrum coefficient reflects the human vocal mechanism, captures the dynamic characteristics of speech, and better describes the local characteristics of the low-frequency part. PLPCC reflects the perceptual characteristics of the human ear, and its identification performance is better than that of MFCC. The two features are combined with three fusion algorithms, and the fused feature is fed into a Gaussian mixture model for speaker recognition. Simulation results demonstrate that, compared with the fusion of LPCC and MFCC, the proposed fusion algorithm achieves a recognition rate that is 8.0% higher.
  • XIE Yue-shan, FAN Xiao-ping, LIAO Zhi-fang, ZHOU Guo-en, LIU Shi-jie
    Computer Engineering. 2013, 39(11): 200-204. https://doi.org/10.3969/j.issn.1000-3428.2013.11.045
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Aiming at the problem that clustering-based outlier detection algorithms give coarse and not very accurate results, this paper proposes an outlier detection algorithm based on the Approximate Outlier Factor(AOF). The algorithm defines the similarity distance and the outlier similarity coefficient, and provides a pruning strategy based on similarity distance that reduces the suspect candidate set and decreases the computational complexity. Experiments are carried out on the public datasets Iris, Labor and Segment-test, and the results show that the algorithm detects outliers and reduces the candidate set effectively compared with classical outlier detection algorithms.
  • ZHOU Chen-long, HU Fu-qiao
    Computer Engineering. 2013, 39(11): 205-208. https://doi.org/10.3969/j.issn.1000-3428.2013.11.046
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Matching objects is an important but difficult issue in the field of pattern recognition, as objects may be affected by scale, rotation, occlusion and intensity changes during recognition. To address this problem, an object matching method based on the Harris algorithm and the geometric hashing algorithm is proposed, which combines extracted interest point features with the structural information of geometric hashing. Experimental results show that this method not only makes it possible to match complex objects, but also increases accuracy and speed.
  • CUI Kai, LING Xing-hong, YAO Wang-shu, FU Yu-chen
    Computer Engineering. 2013, 39(11): 209-213. https://doi.org/10.3969/j.issn.1000-3428.2013.11.047
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    A probabilistic model is a valid tool for solving problems of uncertainty inference and data analysis. An improved algorithm based on Markov networks is proposed, which focuses on the uncertainty of ontology matching. The similarity matrix is computed using several traditional algorithms; the similarity propagation rule is then improved, two structure stability constraint rules and one disjointness coherence constraint rule are added, and the corresponding clique potentials are defined. Based on the similarity matrix and these rules, a method to construct the Markov network is proposed. The ontology matching results are obtained from the posterior probability, which is computed by approximate reasoning with the Loopy Belief Propagation(LBP) algorithm. Experimental results on OAEI 2010 show that the algorithm reduces the complexity of the probabilistic model effectively compared with the iMatch ontology matching system, while the various clique rules and corresponding potentials increase the precision and the recall rate.
  • XIANG Yao-jie, YANG Jun-an, LI Jin-hui, LU Jun
    Computer Engineering. 2013, 39(11): 214-217,222. https://doi.org/10.3969/j.issn.1000-3428.2013.11.048
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Mel-frequency Cepstral Coefficient(MFCC) focuses on extracting information from the lower frequencies of the speech signal and fails to describe the distribution of the speech spectrum sufficiently, so it cannot effectively distinguish speaker-specific information. By analyzing the distribution of speaker-specific information in different frequency bands of the speech signal, the different characteristics of the mel filterbank and the inverted mel filterbank are combined in the high and low frequency bands, and an improved filterbank more suitable for speaker recognition is presented. Experimental results show that features extracted with the improved filterbank achieve better recognition rates than the traditional MFCC and Inverted MFCC, without obviously increasing the computing time.
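    For context, the mel and inverted-mel scales the improved filterbank combines can be sketched as follows. The mel mapping is standard; mirroring the centre frequencies is one common way to build an inverted mel filterbank (with the lower band edge assumed to be 0 Hz), and is an assumption here rather than the paper's exact construction.

```python
import math

def hz_to_mel(f):
    """Standard mel mapping: dense frequency resolution at low frequencies."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    """Inverse of hz_to_mel."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def inverted_center(f, f_max):
    """Mirror a mel-filter centre frequency about the band midpoint, so the
    inverted bank is dense at high frequencies instead."""
    return f_max - f
```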
  • MAO Jun-jun, YAO Deng-bao, LIU Er-bao, WANG Cui-cui
    Computer Engineering. 2013, 39(11): 218-222. https://doi.org/10.3969/j.issn.1000-3428.2013.11.049
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Focusing on the combined effects of the fuzziness and intuitionism of uncertain information and the continuous change of intuitionistic fuzzy entropy, this paper presents two geometric construction methods for intuitionistic fuzzy entropy. The classical construction criterion of intuitionistic fuzzy entropy is refined. Using geometric methods, two new intuitionistic fuzzy entropies are presented by establishing an isentropic plane and an isentropic circle with distance, respectively, and the relevant properties are proved. The general construction method of intuitionistic fuzzy entropy is also discussed and compared with existing entropy formulas by example. The reasonability and correctness of the proposed methods are verified by applying the intuitionistic fuzzy entropy to a multi-attribute decision-making problem.
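    To fix what an intuitionistic fuzzy entropy measures, the following sketch implements one classical formula in the style of Szmidt and Kacprzyk; it is one of the existing formulas such constructions are compared against, not either of the paper's new geometric entropies.

```python
def ifs_entropy(pairs):
    """Entropy of an intuitionistic fuzzy set.

    pairs: list of (mu, nu) membership/non-membership degrees with
    mu + nu <= 1. Each element with hesitancy pi = 1 - mu - nu contributes
    (min(mu, nu) + pi) / (max(mu, nu) + pi), averaged over the set.
    Crisp elements give 0; maximally uncertain ones give 1.
    """
    total = 0.0
    for mu, nu in pairs:
        pi = 1.0 - mu - nu
        total += (min(mu, nu) + pi) / (max(mu, nu) + pi)
    return total / len(pairs)
```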
  • HAN Jun-ying, LIU Cheng-zhong
    Computer Engineering. 2013, 39(11): 223-225,239. https://doi.org/10.3969/j.issn.1000-3428.2013.11.050
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Considering the premature convergence problem of the Fruit fly Optimization Algorithm(FOA), a new collaborative learning FOA based on the best and the worst individuals is presented. The evolutionary equation is optimized by adding learning from the worst individual, which greatly enhances the ability of the algorithm to escape from local optima and find the global optimum. Experimental results show that the new algorithm has better global search ability, faster convergence and higher convergence precision.
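    One plausible form of such a best-and-worst update is sketched below; the specific update equation, the weights `c1`/`c2` and the random factors are illustrative assumptions, not the paper's evolutionary equation.

```python
import random

def collaborative_step(x, best, worst, c1=1.0, c2=0.5):
    """One candidate update that learns from the best individual while
    moving away from the worst one. Moving away from the worst individual
    is what helps the search escape local optima."""
    r1, r2 = random.random(), random.random()
    return [xi + c1 * r1 * (bi - xi) - c2 * r2 * (wi - xi)
            for xi, bi, wi in zip(x, best, worst)]
```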
  • SHI Jian-ting, HUANG Jian-hua, ZHANG Ying-tao, TANG Xiang-long
    Computer Engineering. 2013, 39(11): 226-229,244. https://doi.org/10.3969/j.issn.1000-3428.2013.11.051
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    In order to correct the baseline drift of ECG signals, a method combining wavelet transform and adaptive filtering is proposed. The wavelet transform is used to decompose the original ECG signal, and the high-frequency components are used as the reference input. A new adaptive filtering algorithm, P-LMS, based on the power function is proposed to perform adaptive noise filtering; compared with the traditional Normalized Least Mean Square(NLMS) algorithm, it is more precise. Using simulated experiments and actual data from the MIT-BIH database, it is verified that the method combining P-LMS and wavelet transform can effectively correct the baseline drift while maintaining the geometric characteristics of the ECG signal.
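    For reference, the plain LMS adaptive filter that NLMS and the proposed P-LMS refine can be sketched as follows; the step size and filter order are illustrative assumptions.

```python
def lms_filter(reference, desired, order=4, mu=0.05):
    """Plain LMS adaptive noise canceller (the baseline behind P-LMS/NLMS).

    reference: input correlated with the disturbance (in the paper's setting,
    the wavelet high-frequency components); desired: observed signal.
    Returns the error sequence e[n], i.e. the cleaned-signal estimate.
    """
    w = [0.0] * order                                   # filter weights
    errors = []
    for n in range(len(desired)):
        # current tap-input vector (zero-padded at the start)
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(order)]
        y = sum(wi * xi for wi, xi in zip(w, x))        # filter output
        e = desired[n] - y                              # estimation error
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]  # weight update
        errors.append(e)
    return errors
```

When the reference fully explains the disturbance, the error shrinks as the weights adapt.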
  • ZHANG Tao, LIANG De-qun, WANG Xin-nian, LIU Li-juan
    Computer Engineering. 2013, 39(11): 230-234. https://doi.org/10.3969/j.issn.1000-3428.2013.11.052
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Theoretical and experimental studies have shown that the 2D structure tensor of an image characterizes its local structure and differs greatly between a sharp image and its blurred version. Based on these studies, a novel 3D structure tensor representation of a single image is proposed, whose eigenvalues characterize the local geometric structure and are closely related to image quality, and a new blur assessment method is then built on it. The method first blurs the input image with a weak and a strong low-pass filter and divides the input image into non-overlapping patches, then computes the eigenvalues of the 3D structure tensor and a blur measuring parameter for each patch, and finally computes the blur degree of the image from an exponential function of the blur measuring parameters and the visual attention weight of each patch. Experimental results show that the proposed method is monotonic, robust to additive noise, and consistent with the human visual system.
  • CHEN Kun, LIU Xin-guo
    Computer Engineering. 2013, 39(11): 235-239. https://doi.org/10.3969/j.issn.1000-3428.2013.11.053
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    This paper presents a multi-view 3D reconstruction method based on rays. It generates the bounding volume of the object in the images using silhouettes and discretizes the space of the object into voxels. For each pixel in an image, a ray is generated from the camera center. To assign voxels to rays, the Normalized Cross-correlation(NCC) value is used to measure the consistency between rays and voxels, where the patch normal is estimated to improve the confidence of the corresponding NCC value. A global optimization model based on a factor graph is designed to extract the voxels belonging to the reconstructed object. In addition, an efficient belief propagation algorithm is proposed based on the characteristics of the ray factor, which reduces the computational complexity from exponential to linear. Experimental results show that the proposed method is more robust than previous reconstruction methods based on Markov Random Fields(MRF), and achieves better reconstruction results in terms of both accuracy and completeness.
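    The NCC consistency score used above is a standard photo-consistency measure; a minimal sketch over flattened intensity patches:

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length intensity patches.

    Returns a value in [-1, 1]; values near 1 indicate photo-consistent
    patches, which is how candidate voxels along a ray are scored."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    da = [x - mean_a for x in a]
    db = [y - mean_b for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0
```

Mean subtraction and normalization make the score invariant to affine intensity changes between views.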
  • LI Qian-qian, CAO Guo
    Computer Engineering. 2013, 39(11): 240-244. https://doi.org/10.3969/j.issn.1000-3428.2013.11.054
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Aiming at the problem of image classification with complex backgrounds, this paper proposes a new image classification algorithm based on Laplacian regularized Non-negative Sparse Coding(LNNSC). It combines the respective advantages of Non-negative Sparse Coding(NNSC) and Locality Preserving Projection(LPP): compared with plain sparse coding, the proposed algorithm not only better simulates the receptive field behavior of simple cells in the primary visual cortex V1 of the mammalian visual system, but also makes similar features yield similar non-negative sparse codes. The proposed LNNSC is combined with the Spatial Pyramid Matching(SPM) model and further applied to image classification. Experimental results verify that the proposed algorithm achieves higher accuracy.
  • XU Bin, LI Zhong-ke
    Computer Engineering. 2013, 39(11): 245-248. https://doi.org/10.3969/j.issn.1000-3428.2013.11.055
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    To address the problem that triangle quality optimization algorithms for mesh models cannot preserve geometric detail well, this paper introduces an algorithm for global optimization of triangular meshes guided by vertex Laplacians. Geometric detail is described by the vertex Laplacian, on the condition that the topological structure is unchanged; the Laplacian can describe the geometric features of the mesh surface accurately. For vertex relocation, the new vertex positions are computed as the solution of a linear system that approximates the prescribed Laplacians and positions in a weighted least-squares sense. Experimental results show that the technique successfully improves the quality of the triangle patches while remaining faithful to the original surface geometry.
  • YAN Jun-hua, HANG Yi-qing, SUN Si-jia
    Computer Engineering. 2013, 39(11): 249-253. https://doi.org/10.3969/j.issn.1000-3428.2013.11.056
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    This paper studies the use of the powerful parallel processing capability of the Compute Unified Device Architecture(CUDA) to realize fast image fusion. Image fusion algorithms that are effective and well suited to parallel computing are investigated, including Gaussian filtering, histogram equalization and wavelet-based image fusion. These algorithms are implemented in CUDA and compared with corresponding CPU programs. Experimental results show that image fusion with CUDA is faster than on the CPU by an order of magnitude, and the GPU speed-up ratio increases as the amount of data grows.
  • SHANG Zhao-wei, HU De-heng, ZHAO Heng-jun, YANG Jun
    Computer Engineering. 2013, 39(11): 254-258,263. https://doi.org/10.3969/j.issn.1000-3428.2013.11.057
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    To reduce the influence of changing illumination on texture images, a self-adaptive scheme is proposed to extract illumination invariant features from texture images. The wavelet transform is used to extract the high- and low-frequency components of the logarithmic image, and a different processing strategy is applied to each component to obtain the illumination invariant image. Principal component analysis is adopted to obtain the illumination invariant feature, and a K-nearest feature line classifier is employed for classification. Experimental results on the Outex 14 texture dataset show that the performance of the method is better than that of existing methods, by 5.56% to 22.10%.
  • SHI Hua-liang, LI Wei-guo, JIN Fu-guo
    Computer Engineering. 2013, 39(11): 259-263. https://doi.org/10.3969/j.issn.1000-3428.2013.11.058
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    In order to segment images with intensity inhomogeneity, a new region-based active contour model is proposed. By introducing a Local Image Fitting Bias(LIFB) energy function that embeds local image information, it measures the difference between the original image and the fitted image. Based on the globally convex segmentation method, the split Bregman technique is applied to minimize the proposed energy function efficiently, and an edge detection function is added to the model so that boundaries can be detected more accurately. Experimental results show that the proposed model not only segments images with intensity inhomogeneity and images corrupted by noise, but also efficiently and accurately segments multi-object images and images with similar intensity means but different variances.
  • WANG Qian, ZHANG Ji, GAO Yuan-jun
    Computer Engineering. 2013, 39(11): 264-267. https://doi.org/10.3969/j.issn.1000-3428.2013.11.059
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Remote display is one of the key technologies of application virtualization systems and directly affects system performance and user experience. Current application virtualization systems mostly use image transmission protocols for remote display, which produces a large amount of network traffic. To solve this problem, this paper proposes a component-based approach to application virtualization. A component tree model is created for the source application's interface, and an isomorphic Web component model is derived from it, so that the interface shown in the virtual terminal is isomorphic to the source application's interface. The user's peripheral operations in the virtual terminal are sent to the source terminal for simulated execution, and changes in the source application interface are sent back to the virtual terminal with components as the data elements, keeping the virtual terminal synchronized with the source terminal. Experiments contrast the component-based virtualization model with the image-based model, and the results show that the component-based approach indeed produces less network traffic.
  • ZHOU Yi-min, SHEN Yun-long, CAO Li-dong
    Computer Engineering. 2013, 39(11): 268-271. https://doi.org/10.3969/j.issn.1000-3428.2013.11.060
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Heterogeneous multi-core is the trend in embedded processor architecture because of its enormous advantage in processing complex video encoding and decoding computations, but energy consumption has become a bottleneck that cannot be ignored. This paper proposes a Dynamic Voltage and Frequency Scaling(DVFS) algorithm for H.264 that reduces energy consumption by predicting the workload of frame decoding and scaling the processor's voltage and frequency accordingly. Experimental results indicate that the approach reduces energy consumption by at least about 20%.
  • ZHAO Yong-sheng
    Computer Engineering. 2013, 39(11): 272-275. https://doi.org/10.3969/j.issn.1000-3428.2013.11.061
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Slow response and high false alarm rates are two problems in public sentiment monitoring centers. To deal with them, a novel system based on microformats is presented, and its distributed organization model, data structures and workflows are given. Power spectrum estimation detection algorithms and microformat selection methods are used to monitor public sentiment hotspots, and a novel prediction model captures their trends so that cases can be selected in advance. Simulation results show that the system achieves higher public sentiment precision and efficiency than the SmartC system.
  • YIN Zong-run, LI Jun-shan, SU Dong, SUN Yan-xin
    Computer Engineering. 2013, 39(11): 276-279,284. https://doi.org/10.3969/j.issn.1000-3428.2013.11.062
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Aiming at the difficulty of reliability assessment for complex systems, a model based on the Bayes-GO method is proposed. The Bayes method is adopted for multi-source information fusion to build the component reliability model, and the GO method is used to integrate the component reliability parameters and form the reliability model of the system. An instance of reliability assessment for complex electronic equipment demonstrates the effectiveness of the model. The results show that this method takes advantage of both the Bayes method and the GO method, and provides a useful reference for related applications.
  • XU Liang
    Computer Engineering. 2013, 39(11): 280-284. https://doi.org/10.3969/j.issn.1000-3428.2013.11.063
    A remote control system allows people to access non-native systems, such as the various hardware and software resources provided by another platform. To provide a remote control service on a mobile platform, a remote access and control system is implemented in Java on the Android system, following its application specification and the RFB protocol. Touch screen, soft keyboard and multi-point touch input are also supported by using Android system resources. The system runs on Android mobile phones and can communicate with servers of the VNC software family that support the RFB protocol.
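The first step of any RFB session is the 12-byte ProtocolVersion handshake defined by the RFB specification. The sketch below (in Python rather than the system's Java, with socket I/O omitted) shows only this parsing step, not the full client.

```python
# Sketch of parsing the RFB ProtocolVersion message that a VNC-style
# server sends first: exactly 12 bytes of the form b"RFB xxx.yyy\n".

def parse_server_version(msg: bytes):
    """Return (major, minor) from a 12-byte ProtocolVersion message,
    raising ValueError on anything malformed."""
    if len(msg) != 12 or not msg.startswith(b"RFB ") or msg[-1:] != b"\n":
        raise ValueError("not an RFB ProtocolVersion message")
    major = int(msg[4:7])
    minor = int(msg[8:11])
    return major, minor

major, minor = parse_server_version(b"RFB 003.008\n")
```

After exchanging versions, the client and server negotiate a security type and then begin framebuffer-update and input-event messages, which is where touch and keyboard events are forwarded.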
  • HU Jia-yi, ZHANG Ji, LIU Ling
    Computer Engineering. 2013, 39(11): 285-288,294. https://doi.org/10.3969/j.issn.1000-3428.2013.11.064
    The development of embedded systems faces new trends, including diverse usage scenarios, strict real-time requirements, complex upper-layer applications and the need for strong robustness, which demand improving system safety through better embedded operating systems. Temporal isolation is an important mechanism for improving system safety, and a hierarchical dynamic real-time scheduling framework is proposed as its implementation. This paper uses the homogeneity of tasks to generate task sets, which serve as the basis for task partitioning in the hierarchical framework; it proves the schedulability condition of the framework, designs the structure of the scheduling algorithm, and realizes dynamic switching between scheduling algorithms. Simulation results and theoretical analysis indicate that the proposed framework improves system safety and adapts dynamically to variations in system load while keeping the time complexity of context switching stable.
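Schedulability conditions of the kind the framework proves typically build on the classic single-core utilization tests, which can be sketched as follows. The task set and the choice of EDF/RM tests are illustrative assumptions; the paper's hierarchical condition is more elaborate.

```python
# Illustrative schedulability checks for a task set of (C_i, T_i)
# pairs: worst-case execution time and period.

def utilization(tasks):
    """Total processor utilization U = sum(C_i / T_i)."""
    return sum(c / t for c, t in tasks)

def edf_schedulable(tasks):
    """EDF on one core: schedulable if and only if U <= 1."""
    return utilization(tasks) <= 1.0

def rm_schedulable_sufficient(tasks):
    """Rate-monotonic sufficient test: U <= n * (2^(1/n) - 1)."""
    n = len(tasks)
    return utilization(tasks) <= n * (2 ** (1 / n) - 1)

tasks = [(1, 4), (2, 8), (1, 10)]   # U = 0.25 + 0.25 + 0.1 = 0.6
ok_edf = edf_schedulable(tasks)
ok_rm = rm_schedulable_sufficient(tasks)
```

A hierarchical scheduler would apply such tests per partition, with each partition granted a budget of the processor so that one partition's overload cannot starve another, which is the essence of temporal isolation.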
  • XU Peng-yu, XU Zi-can
    Computer Engineering. 2013, 39(11): 289-294. https://doi.org/10.3969/j.issn.1000-3428.2013.11.065
    Aiming at the problems of low expansibility and high deployment cost in monitoring systems, this paper presents the design and implementation of a monitoring system based on the Session Initiation Protocol(SIP). It designs an open communication system that supports multimedia services such as video conferencing, video monitoring and VoIP over SIP, and implements functions including network establishment, audio and video capture, encoding and decoding, echo cancellation and playback, and emergency calls. The results show that the system can be deployed conveniently alongside other Internet protocols in various network environments and provides rich IP multimedia services.
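Session setup in such a system starts with a SIP INVITE request. The sketch below composes a minimal INVITE with the mandatory headers from RFC 3261; the addresses, tags and branch parameter are made-up examples, not values from the paper's system.

```python
# Hypothetical sketch of composing a minimal SIP INVITE request line
# and mandatory headers (From/To/Via/Call-ID/CSeq per RFC 3261).

def build_invite(caller, callee, call_id, branch):
    lines = [
        f"INVITE sip:{callee} SIP/2.0",
        f"Via: SIP/2.0/UDP client.example.com;branch={branch}",
        "Max-Forwards: 70",
        f"From: <sip:{caller}>;tag=1928301774",
        f"To: <sip:{callee}>",
        f"Call-ID: {call_id}",
        "CSeq: 1 INVITE",
        "Content-Length: 0",
    ]
    # SIP uses CRLF line endings and a blank line before the (empty) body.
    return "\r\n".join(lines) + "\r\n\r\n"

msg = build_invite("alice@example.com", "bob@example.net",
                   "a84b4c76e66710", "z9hG4bK776asdhds")
```

In a real deployment the INVITE would also carry an SDP body describing the offered audio and video streams, which is how the conferencing and monitoring media are negotiated.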
  • HOU Peng-peng, WU Yan-jun, XIE Pei-dong
    Computer Engineering. 2013, 39(11): 295-298,302. https://doi.org/10.3969/j.issn.1000-3428.2013.11.066
    Memory detection tools such as MemLeak and MemWatch are not very efficient when malloc/free operations are called at high frequency. This paper presents a new technique to solve this problem, uses it to improve the MemLeak tool, and applies the improved tool to memory leak detection in the big-data memory system Redis. A practicability test based on the open-source software Redis shows that the technique has good practical value.
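The bookkeeping at the heart of such a leak detector can be modeled simply: record every allocation with its call site, remove the record on free, and report whatever remains at exit. The real tools instrument C allocations; this Python model is only a toy illustration of the idea, not MemLeak's implementation.

```python
# Toy model of malloc/free tracking for leak detection.

class LeakTracker:
    def __init__(self):
        self.live = {}        # allocation id -> (size, call site)
        self.next_id = 0

    def malloc(self, size, site):
        """Record an allocation and return its handle."""
        self.next_id += 1
        self.live[self.next_id] = (size, site)
        return self.next_id

    def free(self, alloc_id):
        """Remove the record; double frees are ignored here."""
        self.live.pop(alloc_id, None)

    def report(self):
        """Records still live at exit are suspected leaks."""
        return sorted(self.live.values())

t = LeakTracker()
a = t.malloc(64, "decode_frame")
b = t.malloc(128, "parse_header")
t.free(a)
leaks = t.report()   # the 128-byte block from parse_header was never freed
```

The efficiency problem the paper targets comes from exactly this bookkeeping being on the hot path of every malloc/free, so a high-frequency workload like Redis stresses the tracking data structure itself.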
  • LIU Wei, YAO Yuan-cheng, QIN Ming-wei
    Computer Engineering. 2013, 39(11): 299-302. https://doi.org/10.3969/j.issn.1000-3428.2013.11.067
    Aiming at the problems that existing timing synchronization algorithms need a long acquisition time and have poor stability and a high error rate, an improved Gardner synchronization algorithm is proposed. After analysis of different interpolation filters, a piecewise parabolic interpolation filter is used in the synchronization structure. For QPSK signals, open-loop and closed-loop analyses are carried out respectively. Simulation results show that, compared with previous algorithms, the proposed synchronization method shortens acquisition by 0.005 s and reduces the mean square error. Moreover, the bit error rate is significantly reduced when the signal-to-noise ratio is greater than 8 dB.
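The Gardner timing-error detector that the algorithm builds on uses two samples per symbol: the error for symbol k is e_k = (y[k] - y[k-1]) * y[k-1/2], where y[k-1/2] is the mid-symbol sample. The sketch below shows only this detector on one real-valued branch; the interpolation filter and loop filter from the paper are omitted.

```python
# Sketch of the Gardner timing-error detector at 2 samples/symbol.

def gardner_ted(prev_symbol, mid_sample, curr_symbol):
    """Timing error for one symbol interval on a real-valued branch:
    e = (y[k] - y[k-1]) * y[k - 1/2]."""
    return (curr_symbol - prev_symbol) * mid_sample

# Perfect timing on a -1 -> +1 transition: the mid sample sits on the
# zero crossing, so the error is zero.
e_locked = gardner_ted(-1.0, 0.0, 1.0)

# Late sampling: a nonzero mid sample yields a corrective error term
# whose sign tells the loop which way to adjust the sampling phase.
e_late = gardner_ted(-1.0, 0.3, 1.0)
```

For QPSK the detector is applied to the I and Q branches and the two error terms are summed before the loop filter.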
  • MA Ran, LIU Yan, CHU Dong-zhi
    Computer Engineering. 2013, 39(11): 303-306,311. https://doi.org/10.3969/j.issn.1000-3428.2013.11.068
    Aiming at the problem that traditional Total Organic Carbon(TOC) laboratory analysis methods are complex to operate and inconvenient to maintain, this paper proposes a dim light signal processing system for a field analyzer of total organic carbon in seawater. The system uses a photomultiplier tube to detect the weak signals produced by chemiluminescent reactions, and combines dim light signal processing circuitry with a complex filtering algorithm for further signal processing to improve the Signal to Noise Ratio(SNR). The results demonstrate that, compared with traditional laboratory analysis methods, the system has the advantages of high speed, stability and freedom from pollution.
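One standard way digital processing improves SNR for a weak, repeatable signal is averaging N noisy acquisitions, which raises SNR by roughly a factor of N (about 10·log10(N) dB). The paper's "complex filtering algorithm" is not specified, so the sketch below, with a synthetic weak sinusoid standing in for the chemiluminescence signal, only illustrates this generic averaging step.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
signal = 0.1 * np.sin(2 * np.pi * 5 * t)   # weak, repeatable signal

def snr_db(clean, noisy):
    """SNR in dB of a noisy measurement against the known clean signal."""
    noise = noisy - clean
    return 10 * np.log10(np.mean(clean ** 2) / np.mean(noise ** 2))

# 64 noisy acquisitions of the same underlying signal.
trials = [signal + rng.normal(0, 0.1, t.size) for _ in range(64)]
averaged = np.mean(trials, axis=0)

snr_single = snr_db(signal, trials[0])
snr_avg = snr_db(signal, averaged)   # roughly 10*log10(64) ~ 18 dB better
```

The same principle underlies photon-counting integration in photomultiplier front ends: longer integration trades measurement speed for SNR.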
  • XIA Xiang-long, CHEN Jin-ping, HU Chun-guang
    Computer Engineering. 2013, 39(11): 307-311. https://doi.org/10.3969/j.issn.1000-3428.2013.11.069
    Real-time measurement and control under massive data conditions with long-term stability is a hard task for a computer running the Windows operating system, and the current electronic control part of the reflectance difference spectrometer meets the same problem. To improve its real-time capability, a design based on a Field Programmable Gate Array(FPGA) development board is proposed. Altera's NiosII softcore technology is used to build a control platform for high-speed USB communication among the FPGA board, the computer and the detector. Functions for synchronizing the working point between the detector and the encoder and for reading data from the encoder are realized in a Verilog Hardware Description Language(Verilog HDL) file, which is integrated into the softcore platform as a module. Experimental results show that the new controller meets the requirements of real-time capability and multi-task control. A framework of softcore processors with self-programmed functional modules is a reasonable solution for on-board control in scientific instruments.
  • YU Han, XIA Xian-cheng, DU Ya-juan
    Computer Engineering. 2013, 39(11): 312-316. https://doi.org/10.3969/j.issn.1000-3428.2013.11.070
    Aiming at applications of the highly integrated DaVinci digital media processor, this paper proposes a high-performance design method for an embedded audio and video collection module. The module uses the TMS320DM6467T as its core processing unit, combines image encoding and decoding chips including the TVP5150, SII9125 and ADV7342 to realize multiple video inputs and outputs, and implements health management functions by monitoring voltage, current, temperature and built-in test results. The module design complies with the 3U CPCI standard, has rich functional interfaces and is easy to ruggedize. Experiments prove that the module can meet the high-performance, high-reliability and high-safety application requirements of terminal equipment in vehicle-mounted mobile environments.
  • LEI Jian-mei, BAI Yun, FENG Yu-ming, LAI Zhi-da, HUANG Xue-mei
    Computer Engineering. 2013, 39(11): 317-320. https://doi.org/10.3969/j.issn.1000-3428.2013.11.071
    A reverse modeling technology is discussed with the help of image processing software and geometric modeling software. Photos of the car prototype and measured data of important parts on the car body are processed into two-dimensional views and body structure dimensions, and the three-dimensional digital model of the car body is reconstructed from this information. Simulation results for automobile FM and GPS antenna gain patterns on the reverse model and on a forward model built from original car data show very good consistency, which verifies that the reverse model is a good substitute when original car data is unavailable. The reverse modeling method and technology discussed solve the problem of providing a data platform for performance analysis tasks that lack original car data.