
15 April 2014, Volume 40 Issue 4
    

  • LIU Hao-yang, ZHU Yong-xin
    Computer Engineering. 2014, 40(4): 1-6. https://doi.org/10.3969/j.issn.1000-3428.2014.04.001
    Combining the efficiency of ASIC systems with software programmability, the reconfigurable computing concept brings further improvements in computer performance. In the context of cloud computing applications, reconfigurable systems require a file system that can efficiently process the massive numbers of small files found on the Internet. This paper therefore surveys and analyzes existing small-file systems and proposes a Field Programmable Gate Array(FPGA)-based small file system, FPGASmallFS. By simplifying the file system structure, supporting adjustable disk block sizes, and exploiting FPGA acceleration, the file system improves both file system speed and disk space utilization. FPGA parallel acceleration also speeds up the file system mount process, which in turn shortens system reconfiguration. IOPS performance test results demonstrate the better usability of FPGASmallFS compared with the ReiserFS and Ext2 file systems.
  • LI Li, YAN Tian-yun
    Computer Engineering. 2014, 40(4): 7-13. https://doi.org/10.3969/j.issn.1000-3428.2014.04.002
    To address the low reliability of cloud storage services caused by Byzantine failures or malicious attacks, a secure and reliable cloud storage scheme is designed that enhances reliability with near-optimal overall performance. To enable efficient decoding for data users during data retrieval, the scheme adopts LT codes to add data redundancy across distributed cloud servers. In addition, the data owner is released from the burden of staying online by enabling public data integrity checking and employing exact repair. Furthermore, an exact-repair solution is proposed so that no metadata needs to be generated on the fly for repaired data. Experimental results show that, compared with existing cloud storage solutions, the proposed scheme roughly doubles data retrieval efficiency while adding only 15% extra cost.
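    The abstract gives no implementation detail for the LT-coding step; as a hedged illustration of the idea it builds on, here is a minimal fountain-code encoder sketch in Python. The ideal-soliton degree distribution and all names are illustrative assumptions, not the paper's construction:

```python
import random

def soliton_degrees(k):
    """Ideal soliton distribution over degrees 1..k (simplified;
    deployed LT codes use the robust soliton distribution)."""
    return [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def lt_encode(blocks, n_packets, seed=0):
    """Generate n_packets encoded packets by XOR-ing random source blocks.
    Each packet carries the indices it combines so a decoder can peel them."""
    k = len(blocks)
    rng = random.Random(seed)
    degrees = list(range(1, k + 1))
    weights = soliton_degrees(k)
    packets = []
    for _ in range(n_packets):
        d = rng.choices(degrees, weights=weights)[0]
        idxs = rng.sample(range(k), d)
        payload = 0
        for i in idxs:
            payload ^= blocks[i]          # XOR combination of source blocks
        packets.append((tuple(idxs), payload))
    return packets

# Example: 4 source blocks encoded into 8 redundant packets.
print(lt_encode([0x11, 0x22, 0x33, 0x44], 8))
```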
  • LIU Ya-qiu, WU Shuang-man, HAN Da-ming, JING Wei-peng
    Computer Engineering. 2014, 40(4): 14-18. https://doi.org/10.3969/j.issn.1000-3428.2014.04.003
    In view of the increasing difficulty of hailing a taxi in traffic jams, an intelligent taxi-calling system based on cloud computing is proposed, consisting of a cloud server and Android clients. The system parallelizes the K-means clustering algorithm with the MapReduce model to improve the quality and efficiency of data pushing on the cloud server side, and uses the LocationClient, MapView and MKOfflineMap interfaces on the Android clients to implement the location service, the overlay display and update service, and the Baidu offline map service respectively. A Remote Procedure Call(RPC) service pushes data, serialized with the Protocol Buffer(PB) protocol, between the cloud server and the Android clients. The designed system achieves about a 20% improvement in search and push efficiency, a 90% reduction in data traffic, and a notable improvement in response time.
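    The paper's MapReduce formulation of K-means is not spelled out in the abstract; the following single-process Python sketch shows the usual map/reduce decomposition (assign each point to its nearest centroid in the map step, average per centroid in the reduce step). All names and data are illustrative:

```python
from collections import defaultdict
import math

def nearest(point, centroids):
    return min(range(len(centroids)),
               key=lambda c: math.dist(point, centroids[c]))

def kmeans_map(points, centroids):
    """Map step: emit (centroid_id, point) pairs."""
    for p in points:
        yield nearest(p, centroids), p

def kmeans_reduce(pairs):
    """Reduce step: average the points assigned to each centroid."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for c, (x, y) in pairs:
        sums[c][0] += x; sums[c][1] += y; sums[c][2] += 1
    return {c: (sx / n, sy / n) for c, (sx, sy, n) in sums.items()}

points = [(1, 1), (1, 2), (8, 8), (9, 8)]
centroids = [(0.0, 0.0), (10.0, 10.0)]
for _ in range(5):                        # one MapReduce job per iteration
    new = kmeans_reduce(kmeans_map(points, centroids))
    centroids = [new.get(c, centroids[c]) for c in range(len(centroids))]
print(centroids)                          # -> [(1.0, 1.5), (8.5, 8.0)]
```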

  • SHAN Liang
    Computer Engineering. 2014, 40(4): 19-25,31. https://doi.org/10.3969/j.issn.1000-3428.2014.04.004
    The media resource files used by a multi-projector display system are scattered across local machines and require a large amount of storage space; a local storage system cannot hold the files of all projects. Therefore, an online project management solution based on cloud computing is proposed for multi-projector display systems. It uses the Amazon cloud platform to provide online management for multi-projector display projects, exploiting the platform's mass storage and scalable computing capability to store the project files online with Amazon's file storage services. Online project management registers and creates project and schedule information, exports all related project files, and packs and uploads them to the cloud server. Test results show that the relationship between projects and project files is established, enabling users to manage all of their projects while relieving local storage pressure.

  • HAN Yi, JIANG Jian-guo, QIU Xin-liang, MA Xin-jian, ZHAO Shuang
    Computer Engineering. 2014, 40(4): 26-31. https://doi.org/10.3969/j.issn.1000-3428.2014.04.005
    Aiming at the problems of the wide variety of malware and the large analysis workload, this paper designs and implements an automatic malware detection system on a cloud platform using VMware vSphere virtualization technology. The platform adopts a polling mechanism to monitor the load of virtual machines on the servers, preprocesses collected suspicious samples according to their type, and tests the samples with the corresponding server resources. It offers users a variety of virtual environments, automatically analyzes four kinds of malware host behavior (file, registry, process and network), and provides online analysis reports, thereby coping effectively with the wide variety of malicious programs, reducing the analysis workload and improving analysis efficiency. Experimental results on real samples show that the platform provides more precise characteristic and threat information of analyzed samples than the Jinshan Fireeye and Threat Expert platforms.
  • LIU Fei, LUO Yong-long, GUO Liang-min, MA Yuan
    Computer Engineering. 2014, 40(4): 32-36. https://doi.org/10.3969/j.issn.1000-3428.2014.04.006
    To better achieve the cloud objective of providing low-cost, on-demand services and to meet customers' personalized demands, a dynamic trust model for personalized cloud services is proposed in this paper. It defines personalized cloud services based on fine-grained theory and revises direct trust with a time-decay coefficient and an incentive mechanism. It composes the recommendation trust value from recommendation credibility and an evaluation similarity computed with grey system theory. To improve the accuracy and rigor of trust evaluation, a method of dynamically setting the self-confidence factor based on evaluation similarity is designed. Experimental results show that the average transaction success rate increases by 4% and 11% compared with the GM-Trust and CCIDTM models respectively.
  • ZHAO Ming-lei, ZHAO Wen-dong, PENG Lai-xian, CHENG Ang-xuan
    Computer Engineering. 2014, 40(4): 37-41,47. https://doi.org/10.3969/j.issn.1000-3428.2014.04.007
    It is difficult to achieve flexible sharing of computing resources between heterogeneous systems, which is a major obstacle to improving the performance of distributed information systems. The emergence of Service-oriented Architecture(SOA)-based Web service technology provides an effective means of sharing computing resources among heterogeneous systems. To overcome the performance bottleneck and single point of failure of centralized Web service systems, a novel distributed dynamic service composition algorithm based on the Business Abstract Plan(BAP) is proposed. With this approach, composite solutions can be constructed quickly, and the BAP repository is expanded automatically with the results of current composition plans, gradually increasing the response rate to service requests. Simulation results show that the proposed algorithm reduces average response time and improves composition efficiency in distributed environments.
  • ZHANG Yong-yue, YUN Li-jun, SUN Yu
    Computer Engineering. 2014, 40(4): 42-47. https://doi.org/10.3969/j.issn.1000-3428.2014.04.008
    Aiming at the schedulability determination problem of the avionics two-level scheduling model, which contains only periodic task sets and follows the ARINC653 multi-partition architecture, a partition-based avionics scheduling analysis tool is proposed. By maintaining clock variables, the tool simulates the scheduling process of the task set in each partition, determines the simulation interval according to the characteristics of the periodic task sets and the partition time-slot configuration of the avionics system, optimizes the scheduling analysis algorithm, and evaluates both the correctness of the partition-level time-slot configuration and the schedulability of the task set in each partition. Tests and case analyses show that the tool determines the schedulability of partition-level and task-level scheduling models automatically, accurately and quickly, and can depict the task scheduling process with a Gantt chart. Compared with existing tools, it is more intuitive and efficient.
  • YANG Yong, QIAN Zhen-jiang, HUANG Hao
    Computer Engineering. 2014, 40(4): 48-52,56. https://doi.org/10.3969/j.issn.1000-3428.2014.04.009
    To prevent kernel attacks within the Android system and protect the Android kernel, this paper designs a lightweight hypervisor monitoring architecture for the ARM platform. By applying ARM virtualization technology and isolating untrusted modules, the architecture prevents malicious code from damaging the kernel and from tampering with key kernel objects, and it can detect rootkits by cross-view comparison. Experimental results show that the architecture promptly stops tampering with monitored objects and quickly detects rootkits, thereby reducing the damage of attacks on the system.
  • TANG Hai-dong, WU Yan-jun
    Computer Engineering. 2014, 40(4): 53-56. https://doi.org/10.3969/j.issn.1000-3428.2014.04.010
    Aiming at the low efficiency of the distributed synchronization system Zookeeper in large-scale computer clusters, this paper puts forward an automatic response-node selection algorithm based on member-node election. In a large-scale Zookeeper system, a configurable-factor election algorithm (considering computing capacity, disk read/write rate, request rate, failure rate and network latency) picks out the one or several most suitable nodes to handle Zookeeper's response work. By answering data update requests on these nodes, the system's response time is reduced and overall performance is improved. Experimental results show that, compared with manually designating response nodes, the automatic election algorithm always elects the most suitable nodes, with high efficiency and stable performance. In access-latency tests, the automatic election algorithm reduces average response latency by 11% and maximum response latency by 17% compared with manually set nodes.
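    As a hedged illustration of the configurable-factor election described above, here is a minimal scoring-and-selection sketch. The factor names, weights and the assumption that every metric is normalized to [0,1] are illustrative, not the paper's exact formula:

```python
# Rank candidate nodes by a weighted score over the election factors.
WEIGHTS = {"cpu": 0.3, "disk_io": 0.2, "request_rate": 0.2,
           "failure_rate": 0.15, "latency": 0.15}

def score(node):
    """Higher is better: reward capacity, penalize failures and latency."""
    return (WEIGHTS["cpu"] * node["cpu"]
            + WEIGHTS["disk_io"] * node["disk_io"]
            + WEIGHTS["request_rate"] * node["request_rate"]
            - WEIGHTS["failure_rate"] * node["failure_rate"]
            - WEIGHTS["latency"] * node["latency"])

def elect(nodes, k=1):
    """Pick the k most suitable responder nodes."""
    return sorted(nodes, key=score, reverse=True)[:k]

nodes = [
    {"id": "n1", "cpu": 0.8, "disk_io": 0.7, "request_rate": 0.5,
     "failure_rate": 0.01, "latency": 0.12},
    {"id": "n2", "cpu": 0.6, "disk_io": 0.9, "request_rate": 0.4,
     "failure_rate": 0.05, "latency": 0.30},
]
print([n["id"] for n in elect(nodes, k=1)])   # -> ['n1']
```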
  • LIANG Dong, ZANG Dong-song, HUO Jing, SUN Gong-xing, Valentin Kuznetsov
    Computer Engineering. 2014, 40(4): 57-63,70. https://doi.org/10.3969/j.issn.1000-3428.2014.04.011
    The Compact Muon Solenoid(CMS) experiment on the Large Hadron Collider(LHC) produces petabytes of physics data. These data are not only huge in volume but also complex in type and distributed all over the world, so the meta-data describing how the physics data are organized reaches terabytes in size. The meta-data are kept in different relational and non-relational data sources in different formats. To meet data discovery requirements, it is important to provide a unified query interface. By adding a caching layer on top of these data sources, this paper implements a data aggregation system that provides a precise keyword-style search interface. It demonstrates how to answer user queries through multiple mappings and aggregation, and how to manage the cache efficiently. Experimental results show that more than 70% of user queries can be answered by the cache system, which delivers good query performance.
  • YU Guang-zhou
    Computer Engineering. 2014, 40(4): 64-70. https://doi.org/10.3969/j.issn.1000-3428.2014.04.012
    Aiming at the large delay of existing data gathering methods, this paper proposes an optimal network topology construction algorithm for data collection. First, k subgraphs that minimize the distance between vertices are found in the given connected network graph; then the Hungarian method is used to reduce the edges of the k subgraphs until a spanning tree is obtained. To reduce control overhead, a distributed algorithm for constructing the network topology is also proposed, which improves the adaptability of the algorithm to different scenarios. Theoretical analysis and experimental results show that the method outperforms traditional algorithms such as Single Chain(SC), Single Cluster 2-Hop(SC2H) and Minimum Spanning Tree(MST) in terms of data collection delay and network lifetime.
  • WEI Liang-fen, LIU Tao, WANG Yong
    Computer Engineering. 2014, 40(4): 71-75. https://doi.org/10.3969/j.issn.1000-3428.2014.04.013
    In typical Network on Chip(NoC) routers, buffer resources are not fully utilized and large buffer capacities are hard to provision. This paper proposes a new router design that resolves buffer resource contention in NoC without adding buffers or virtual channels. When an input port is busy and resource contention occurs in the new router, the blocked packet is assigned to another input port with idle buffer resources, which resolves buffer contention and improves overall network performance. SystemC simulation results show that, compared with the basic router, the proposed router achieves a higher network saturation rate and throughput under non-uniform and hot-spot traffic patterns; under hot-spot patterns in particular, the saturation rate increases by 11.4%. FPGA implementation results show that the router's area overhead is small, so it can well meet NoC application requirements.
  • WANG Tao, LI Wen-wei
    Computer Engineering. 2014, 40(4): 76-80. https://doi.org/10.3969/j.issn.1000-3428.2014.04.014
    Existing sender-based methods for identifying the causes of bit-error packet loss suffer from large communication costs and low identification accuracy. Through theoretical analysis, this paper finds that when collision loss occurs in wireless transmission, the Error Vector Magnitude(EVM) of the packet at the receiver is larger than that of weak-signal loss, which verifies that EVM can differentiate the causes of bit-error packet loss. A receiver-based method for identifying the causes of bit-error packet loss in Wireless Local Area Networks(WLANs) is then proposed. It directly reads the Received Signal Strength Indicator(RSSI) and EVM values at the receiver and adopts a Bayes classification algorithm to distinguish collision loss from weak-signal loss. Experimental results show that, compared with the COLLIE identification method, it incurs no additional communication cost, its accuracy is higher, and both its false alarm rate and miss rate are lower.
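    A hedged sketch of the receiver-side classification idea: a Gaussian naive Bayes classifier over (RSSI, EVM) features separating collision loss from weak-signal loss. The training values and class labels are illustrative, not the paper's data:

```python
import math

def fit(samples):
    """samples: {label: [(rssi, evm), ...]} -> per-class (mean, var) per feature."""
    stats = {}
    for label, rows in samples.items():
        stats[label] = []
        for col in zip(*rows):
            mean = sum(col) / len(col)
            var = max(1e-6, sum((x - mean) ** 2 for x in col) / len(col))
            stats[label].append((mean, var))
    return stats

def log_gauss(x, mean, var):
    return -0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)

def classify(stats, feature):
    """Equal priors assumed; pick the class with the largest log-likelihood."""
    return max(stats, key=lambda lbl: sum(
        log_gauss(x, m, v) for x, (m, v) in zip(feature, stats[lbl])))

train = {
    "collision":   [(-55, 9.0), (-58, 8.5), (-54, 9.4)],   # strong RSSI, high EVM
    "weak_signal": [(-88, 4.0), (-90, 4.6), (-86, 3.8)],   # weak RSSI, low EVM
}
model = fit(train)
print(classify(model, (-57, 9.1)))   # -> collision
```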
  • LI Ming-yuan, LI Ou, SUN Wu-jian
    Computer Engineering. 2014, 40(4): 81-86. https://doi.org/10.3969/j.issn.1000-3428.2014.04.015
    Traditional double-threshold cooperative spectrum sensing algorithms discard the local sensing information of Cognitive Radio(CR) users that falls between the two thresholds, although using this information can further improve the sensing performance of the CR system. This paper therefore puts forward a sequential adaptive-step combination algorithm based on equal gain combining. CR users between the two thresholds upload their local sensing information step by step in descending order of received Signal-to-Noise Ratio(SNR), and the Fusion Center(FC) adaptively adjusts the number of CR users participating in the collaboration to reduce the data overhead in the CR network. Formulae for the algorithm's average number of uploaded bits and the detection probability of cognitive users over Rayleigh fading channels are derived. Theoretical analysis and simulation results show that the algorithm obtains better cooperative sensing performance with relatively low data overhead.
  • ZHU Li, GU Neng-hua, YAO Ying-biao, FAN Yi-ming
    Computer Engineering. 2014, 40(4): 87-90,95. https://doi.org/10.3969/j.issn.1000-3428.2014.04.016
    Localization is one of the key technologies in Wireless Sensor Network(WSN) applications. Focusing on the WSN localization problem, this paper proposes a Received Signal Strength Indicator(RSSI)-based two-dimensional logarithmic search localization algorithm. The algorithm adopts a modified RSSI ranging model to estimate inter-node distances, uses the result of the centroid localization algorithm as the initial search point, and employs a two-dimensional logarithmic search, whose objective is to minimize the sum of distance errors with improved weighting factors, to realize distributed self-localization. Experiments compare the localization performance of the centroid algorithm, the unweighted two-dimensional logarithmic search algorithm and the proposed RSSI-based two-dimensional logarithmic search algorithm. Results show that the localization precision of the proposed algorithm is much better than that of the centroid algorithm and improves by 0.02R over the unweighted two-dimensional logarithmic search algorithm.
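    Two building blocks named in the abstract can be sketched compactly: a log-distance RSSI ranging model and the centroid estimate used as the search starting point. The model constants A and n below are illustrative assumptions, not the paper's calibrated values:

```python
def rssi_to_distance(rssi, a=-45.0, n=2.5):
    """Log-distance path-loss model: RSSI = A - 10*n*log10(d)."""
    return 10 ** ((a - rssi) / (10 * n))

def centroid(anchors):
    """Initial position estimate: centroid of the anchor coordinates."""
    xs, ys = zip(*anchors)
    return sum(xs) / len(xs), sum(ys) / len(ys)

anchors = [(0, 0), (10, 0), (0, 10)]
print(rssi_to_distance(-65.0))   # estimated range, ~6.3 distance units
print(centroid(anchors))         # starting point for the 2D logarithmic search
```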
  • HUANG An-qi, FENG Chao, SUN Jian-feng, TANG Chao-jing
    Computer Engineering. 2014, 40(4): 91-95. https://doi.org/10.3969/j.issn.1000-3428.2014.04.017
    Ultra High Frequency(UHF) Radio Frequency Identification(RFID) systems are widely used, and researchers focus on physical-layer and Media Access Control(MAC)-layer characteristics to enhance reading accuracy, shorten reading slots and ensure security. Detecting an existing RFID communication process and acquiring its low-layer information is a prerequisite, yet the reader only provides higher-layer results. To acquire RFID low-layer information, this paper presents passive detection of the wireless signal with a Universal Software Radio Peripheral(USRP), and realizes the information detection platform through Digital Signal Processing(DSP). Test results illustrate the platform's high receiving sensitivity and analysis accuracy, and its 8 m detection distance exceeds that of other platforms. The detection sensitivity is −115 dBm, the reaction time is only 0.068 s, and the platform can resist common wireless interference.
  • JIANG Shen, MA Rong-juan
    Computer Engineering. 2014, 40(4): 96-102,107. https://doi.org/10.3969/j.issn.1000-3428.2014.04.018
    Opportunistic routing is widely known to perform substantially better than traditional unicast routing in wireless networks with lossy links. However, Wireless Sensor Networks(WSNs) are heavily duty-cycled, which renders existing opportunistic routing schemes impractical. This paper proposes a novel opportunistic routing protocol based on Estimated Duty Cycled Wake-ups(EDC). It establishes key properties of EDC as a routing metric that supports distributed computation and leads to a loop-free topology. In addition, theoretical analysis shows that EDC is an accurate approximation of the true number of duty-cycled wake-ups required to forward a packet. Experimental results on the Twist and Motelab testbeds show that the proposed protocol outperforms ETX-based opportunistic routing in terms of radio duty cycle, time delay and number of forwarding nodes.
  • SUN Qi, QIAN Han-wang, LIU Jian-po, QIU Yun-zhou
    Computer Engineering. 2014, 40(4): 103-107. https://doi.org/10.3969/j.issn.1000-3428.2014.04.019
    This paper analyzes the Guaranteed Time Slot(GTS) allocation mechanism used in IEEE 802.15.4 networks for time-critical applications. To improve channel utilization, two main changes are made. First, GTS time slots are adaptively divided into minislots. Second, the GTS request command is changed slightly: instead of explicitly indicating the GTS length, the number of GTS packets and the packet length are notified. Simulation results show that the proposed protocol enhances channel utilization and network throughput.
  • ZHANG Liang, LIU Jing-hao, LI Zhuo
    Computer Engineering. 2014, 40(4): 108-111,115. https://doi.org/10.3969/j.issn.1000-3428.2014.04.020
    Named Data Network(NDN) is a content-centric network architecture that effectively improves the shared utilization of network resources. However, NDN names are much longer than traditional IPv4 and IPv6 addresses and are variable in length, so fast name retrieval is essential and of great significance to further improving network performance. Aiming at this problem, this paper presents a partitioned name retrieval method based on hash mapping. Names are decomposed into components, whose CRC32 hash values are computed and stored in corresponding tables. After the hash tables are quick-sorted, values can be located by binary search, and because the tables keep their entries in increasing order, hash collisions can be detected quickly; marker bits appended to the hash values resolve the collisions. Experimental results show that the partitioned name retrieval method improves the storage compression rate by about 65% and significantly improves retrieval speed compared with retrieval based on a Name Prefix Tree(NPT).
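    A hedged sketch of the lookup idea: hash each name component with CRC32, keep the values in sorted order, and locate them by binary search. The paper's table layout and collision-marker bits are omitted:

```python
import bisect
import zlib

def component_hashes(name):
    """Split an NDN-style name into components and CRC32-hash each one."""
    return [zlib.crc32(c.encode()) for c in name.strip("/").split("/")]

# Build a sorted hash table from the components of two sample names.
table = sorted(component_hashes("/cn/edu/video/segment1")
               + component_hashes("/cn/com/news"))

def contains(table, h):
    """Binary search in the sorted hash table."""
    i = bisect.bisect_left(table, h)
    return i < len(table) and table[i] == h

print(all(contains(table, h) for h in component_hashes("/cn/edu/video")))  # True
```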
  • WANG Wen, LENG Wen, WANG An-guo, LIU Shan
    Computer Engineering. 2014, 40(4): 112-115. https://doi.org/10.3969/j.issn.1000-3428.2014.04.021
    To address the problem that frequency offset estimators cannot simultaneously achieve a wide estimation range and high estimation accuracy, a novel frequency offset estimation algorithm based on the autocorrelation function is proposed after analysis and comparison of several existing algorithms. The algorithm correlates the received signal with the conjugate of the local aided data, then divides adjacent symbols of the autocorrelation function of the resulting signal to unwrap the phase; the obtained phase difference sequence is used to estimate the frequency offset, and finally the results are weighted and averaged. A detailed performance analysis shows that the theoretical estimation range can reach 50% of the symbol rate. Simulation results show that the algorithm achieves an estimation variance close to the Cramer-Rao Bound(CRB) when SNR ≥ −5 dB. The results show that the proposed algorithm effectively resolves the contradiction between estimation range and estimation accuracy, which proves its practicability.
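    The estimation principle can be sketched briefly: strip the known aided data by conjugate multiplication, then average the phase increments between adjacent samples to recover the offset. The paper's weighting step is reduced here to a plain phasor sum, and all parameters are illustrative:

```python
import numpy as np

def estimate_offset(rx, pilot, symbol_rate):
    """Return the estimated carrier frequency offset in Hz."""
    y = rx * np.conj(pilot)                # remove the known modulation
    z = y[1:] * np.conj(y[:-1])            # adjacent-sample phase increments
    dphi = np.angle(np.sum(z))             # averaged phase increment
    return dphi * symbol_rate / (2 * np.pi)

fs = 1e6                                    # 1 sample per symbol
n = np.arange(1000)
pilot = np.exp(1j * np.pi * (n % 2))        # known alternating aided sequence
f_true = 120e3                              # offset well inside +/- 0.5*fs
rx = pilot * np.exp(2j * np.pi * f_true * n / fs)
print(estimate_offset(rx, pilot, fs))       # ~120000.0
```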
  • HUANG Zhi-qiang, WANG Mei-qing
    Computer Engineering. 2014, 40(4): 116-119. https://doi.org/10.3969/j.issn.1000-3428.2014.04.022
    The binary tree structure solves the problem of communicating pairs of peak points, and the triangle predictor and error energy estimator improve image quality and payload. In view of the inaccuracy of the triangle predictor, a reversible information hiding algorithm based on neighborhood-prediction difference histogram shifting is proposed. It replaces triangle prediction with neighborhood prediction and modifies the error energy and the secret-message extraction procedure accordingly. In experiments on the seven most common test images, the capacity is improved by an average of 8.7%, and in some settings by 300%. Experimental results reveal that the proposed method has a more precise predictor, a higher capacity and better confidentiality.
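    As a hedged toy of prediction-difference histogram shifting on a 1D signal (the paper's 2D neighborhood predictor and error-energy estimator are simplified to a causal previous-pixel predictor): errors equal to zero carry a bit, larger errors are shifted to make room.

```python
def embed(pixels, bits):
    """Embed bits in the zero bin of the prediction-error histogram.
    This toy assumes the payload exactly fills the zero-error positions."""
    out, k = [pixels[0]], 0
    for prev, cur in zip(pixels, pixels[1:]):
        e = cur - prev                 # causal prediction error
        if e == 0:
            e = bits[k]; k += 1        # bit 0 keeps 0, bit 1 makes it 1
        elif e > 0:
            e += 1                     # shift positive errors to free bin 1
        out.append(out[-1] + e)
    return out

def extract(marked):
    """Recover the bits and restore the original pixels exactly."""
    bits, restored = [], [marked[0]]
    for prev, cur in zip(marked, marked[1:]):
        e = cur - prev
        if e in (0, 1):
            bits.append(e); e = 0
        elif e > 1:
            e -= 1
        restored.append(restored[-1] + e)
    return bits, restored

px = [10, 10, 12, 12, 11, 11]
marked = embed(px, [1, 0, 1])               # three zero errors, three bits
print(extract(marked) == ([1, 0, 1], px))   # -> True (fully reversible)
```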
  • WU Chun-ying, LI Shun-dong, CHEN Zhen-hua
    Computer Engineering. 2014, 40(4): 120-123,129. https://doi.org/10.3969/j.issn.1000-3428.2014.04.023
    For hierarchical key management, this paper presents an efficient verifiable secret sharing scheme. It partitions the set of participants into several parts, each called a compartment: the participants in one compartment can share a secondary secret, while the master secret is distributed among the whole set of participants. Each participant holds only one short share, which can be used to reconstruct a large master secret. Verifiability is realized with a two-variable one-way function, preventing dishonest participants from cheating. Participants can be added and deleted, and the threshold and share values changed, dynamically, so the scheme can be applied to hierarchical key management. Analysis results show that the scheme has good performance and security.
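    The abstract does not detail the construction; as background, here is a minimal Shamir-style threshold sharing sketch over a prime field. The verifiability (two-variable one-way function) and compartment layers of the paper's scheme are omitted:

```python
import random

P = 2**61 - 1                      # a Mersenne prime, large enough for the demo

def make_shares(secret, t, n, rng=random.Random(42)):
    """Split secret into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, t=3, n=5)
print(reconstruct(shares[:3]))     # any 3 of the 5 shares recover 123456789
```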
  • HUANG Kun,DING Xue-feng, LI Jing
    Computer Engineering. 2014, 40(4): 124-129. https://doi.org/10.3969/j.issn.1000-3428.2014.04.024
    To deal with the security problem of key exposure and the problem of transmitting large encrypted data in peer-to-peer networks, this paper proposes the first Identity-based Key-insulated Encryption(IB-KIE) scheme with message linkages, built from a key-insulated cryptographic mechanism and plaintext block-chaining encryption. In the proposed scheme, each client can periodically update its private key while the corresponding public key remains unchanged. Under the random oracle model, it is formally proved that the IB-KIE scheme with message linkages achieves Indistinguishability Against Adaptive Chosen-Ciphertext Attacks(IND-CCA2). The essential security assumption of the proposed scheme is the well-known Bilinear Diffie-Hellman Problem(BDHP). The scheme supports unbounded time periods and random-access key updates. Compared with the basic scheme, the ciphertext of the scheme with message linkages is only half as long, making it suitable for transmitting large encrypted data in Peer-to-Peer(P2P) networks.
  • GONG Bo-ru, ZHAO Yun-lei, Rudolf Fleischer, WANG Xiao-yang
    Computer Engineering. 2014, 40(4): 130-135,140. https://doi.org/10.3969/j.issn.1000-3428.2014.04.025
    As one of the classic identification schemes in cryptography, Schnorr's scheme can be instantiated on many underlying mathematical hard problems besides the discrete logarithm problem. Moreover, the Fiat-Shamir transform converts a Schnorr identification scheme that is secure in the standard model into a digital signature scheme that is secure in the random oracle model. Against this background, this paper analyzes the features of Schnorr's scheme and derives the necessary and sufficient conditions for a secure Schnorr-like scheme, from which secure Schnorr-like schemes can be constructed in a broader sense. Using the concept of a Schnorr-like scheme, it then proves rigorously that the existence of aborts in some lattice-based identification schemes, such as the Σ-identity authentication scheme, is inevitable, which sheds light on a better understanding of lattice-based signatures in the future.
  • DENG Yu-qiao
    Computer Engineering. 2014, 40(4): 136-140. https://doi.org/10.3969/j.issn.1000-3428.2014.04.026
    In practical applications, most attribute-based encryption schemes are limited because attributes are static. To overcome this problem, this paper develops a dynamic ciphertext-policy attribute-based encryption scheme based on bilinear pairings, related to conditions-based encryption. In the scheme, a user is allowed to compute its own attribute key and ciphertexts once it satisfies a certain attribute and the authenticating party's digital signature is given. The security of the scheme is discussed at last, showing that the encryptions of two plaintexts of the same length are computationally indistinguishable.
  • WANG Qiu-yan, JIN Chen-hui
    Computer Engineering. 2014, 40(4): 141-145,150. https://doi.org/10.3969/j.issn.1000-3428.2014.04.027
    LEX is one of the stream ciphers that progressed to Phase 3 of the eSTREAM project and is designed on top of the block cipher AES. This paper proposes a related-key attack on LEX based on guess-and-determine. Given 2^39.5 bytes of key stream under each of a pair of related keys, using differential cryptanalysis and some properties of AES, the attack recovers the entire candidate key set by guessing 2 bytes of the key and 8 bytes of the internal-state difference, and then finds the correct key by an encryption test. Analysis shows that the success probability is 1 and the time complexity is 2^100.3 AES encryptions.
  • WANG Zhi-he, YANG Yan
    Computer Engineering. 2014, 40(4): 146-150. https://doi.org/10.3969/j.issn.1000-3428.2014.04.028
    Traditional grid-based data stream clustering algorithms cluster on a grid of a single granularity, which improves processing speed but lowers clustering accuracy. To address this, a new data stream clustering algorithm, DBG-Stream, based on a double-layer grid and density is put forward. The algorithm clusters the data stream on grids of two different granularities and, borrowing the idea of the CluStream algorithm, divides clustering into two stages: in the online stage, coarse-grained grid cells form the initial clusters; in the offline stage, fine-grained grid cells on cluster boundaries are re-clustered to improve accuracy. Key parameters are set automatically, and a grid-deletion strategy improves the algorithm's efficiency. Experimental results show that the clustering accuracy of DBG-Stream is greatly improved compared with the D-Stream algorithm, effectively overcoming the limitation of traditional grid-based clustering algorithms.
  • XIE Yue-fei, CAI Xiao-dong, LIN Jing-liang, ZHANG Xue-min, WU Dan
    Computer Engineering. 2014, 40(4): 151-153,158. https://doi.org/10.3969/j.issn.1000-3428.2014.04.029
    According to the characteristics of human vision, this paper presents a video stabilization method that computes the jitter component of the camera motion by curve fitting and compensates for it. Global camera motion is estimated from feature points in the image background, and the jitter component is extracted by curve fitting. Only the jitter is compensated, which minimizes the image displacement vector and effectively reduces the loss of image information. Experimental results show that after stabilization the jitter component is less than 2 degrees for original videos whose camera jitter is within 20 degrees, and the loss of image information is less than 5%. The method is therefore effective and better preserves the integrity of frame content after stabilization.
  • XU Wen, Lü Ke, YANG Lei, LIN Zheng-zong, ZHAI Rui
    Computer Engineering. 2014, 40(4): 154-158. https://doi.org/10.3969/j.issn.1000-3428.2014.04.030
    Using the land-sea boundary as a feature is a common method in satellite image navigation and registration, and it is especially useful for registering infrared satellite images, whose grey values change greatly between day and night. As the reference in image registration, the accuracy of the land-sea boundary template determines the final accuracy of image navigation and registration. According to the characteristics of the geostationary satellite image navigation process, this paper proposes a method to generate the land-sea boundary template, covering target grid generation, selection of the world coastline database and the search algorithm. Three search algorithms are proposed by studying the characteristics of the data, and the accuracy and efficiency of each are analyzed. Experimental results show that the matching degree between the generated land-sea boundary template and templates from international mainstream tools exceeds 90%, and the algorithm is efficient and performs well in practical applications.
  • GUO Yun-long, PAN Yu-bin, ZHANG Ze-yu, LI Li
    Computer Engineering. 2014, 40(4): 159-163,169. https://doi.org/10.3969/j.issn.1000-3428.2014.04.031
    With the development and popularity of new technologies and social networks, the data volume of micro-blog users surges sharply, and related research attracts increasing attention from both academia and industry. This paper proposes a new statistical method for feature extraction and carefully analyzes the classification performance of schemes such as Support Vector Machine(SVM), Naive Bayes and K-Nearest Neighbour(KNN). It proposes a combined model based on D-S evidence theory to exploit the advantages of the different classifiers. A series of experiments is conducted on the Chinese micro-blog data provided by CCF NLP&CC 2012, taking the NLP&CC 2012 average results of 72.7% precision, 61.5% recall and 64.7% F-measure as the baseline. Experimental results show that the method reaches 70.6% precision, 89.2% recall and 78.9% F-measure, a significant enhancement in recall and F-measure, with the F-measure even 0.5% higher than the best result of NLP&CC 2012.
  • CHENG Gong, FANG Yu-chun, YU Chan-juan, LI Yang
    Computer Engineering. 2014, 40(4): 164-169. https://doi.org/10.3969/j.issn.1000-3428.2014.04.032
    Face semantic retrieval is a key topic in biometric recognition: facial expression recognition, gender classification and age estimation all accomplish their functions by capturing semantics. This paper studies face semantic information in face retrieval and proposes a face semantic subspace extraction method. Semantic subspace learning is divided into dictionary building and sparse learning. In dictionary building, a semantic-difference method is given to compute mutually exclusive semantics, so that an extracted semantic is not disturbed by other semantics; tests of different combinations in different semantic environments show that the method is more stable. In sparse learning, the Lasso algorithm is improved, and results show that, compared with the Fisher method, the subspace performance is improved.
  • JING Jing, XU Guang-zhu, LEI Bang-jun, HE Yan
    Computer Engineering. 2014, 40(4): 170-174,181. https://doi.org/10.3969/j.issn.1000-3428.2014.04.033
    Aiming at the low tracking precision caused by discriminant functions that insufficiently consider target appearance in popular real-time compressed-domain tracking algorithms, this paper proposes an improved algorithm. Low-dimensional multi-scale features of candidate targets are extracted with a sparse measurement matrix, and a Bayes classifier discriminates the target from the background according to online-updated feature probability distributions, realizing coarse tracking. On the basis of the coarse result, a second tracking pass based on a dynamic appearance model of the target searches online for the optimal tracking position by measuring the local-region similarity of candidate targets between video frames. Test results on several challenging videos show that the proposed algorithm effectively improves the original tracking precision without introducing much extra computation.
  • WU Hong-he, CHEN Li-fei, GUO Gong-de
    Computer Engineering. 2014, 40(4): 175-181. https://doi.org/10.3969/j.issn.1000-3428.2014.04.034
    The Variable-order Markov Model(VLMM) is a simple but effective model for event sequence modeling. However, the classic VLMM considers only transition probabilities, without taking the frequency of substrings into account. A Weighted VLMM(WVLMM) is proposed in this paper, which constructs a Weighted Probabilistic Suffix Tree(WPST) from substring frequencies on top of the classic VLMM. A branch-pruning strategy based on node similarity is also proposed for use during tree construction, to improve the generalization ability of the model and to build the tree in linear time. To validate its effectiveness, the proposed model is applied to event sequence classification. Experimental results demonstrate that the new model classifies real-world sequence datasets from different applications effectively.
  • YANG Jun, ZHANG Rui-feng, WANG Xiao-peng, LIN Yan-long
    Computer Engineering. 2014, 40(4): 182-186,191. https://doi.org/10.3969/j.issn.1000-3428.2014.04.035
    Aiming at the problem of illumination preprocessing for face recognition, a new face illumination preprocessing algorithm based on guided image filtering is presented. Guide images and input images are classified by an illumination criterion function, and Histogram Equalization(HE) adjusts the images after they are processed by a non-linear transformation (logarithmic or exponential). Guided image filtering is then applied to enhance image details and make the processed images clearer, and a spatial-domain high-pass filter suppresses local over-sharpening. A series of experiments on the YaleB face database shows that the proposed method is clearly superior to principal component analysis, improving the recognition rate by 2%~8% in comparison with other methods.
  • GAO Yu-ming, ZHANG Ren-jin
    Computer Engineering. 2014, 40(4): 187-191. https://doi.org/10.3969/j.issn.1000-3428.2014.04.036
    Aiming at the problems of the BP neural network, namely its tendency to get stuck in local minima and its slow convergence, a Genetic Algorithm(GA)-optimized BP neural network is proposed for house price prediction. The paper builds a house price prediction model with a BP neural network whose connection weights and structure are optimized by the GA. House prices in Guiyang and their main influencing factors from 1998 to 2011 are selected as the experimental data, used to train and simulate both the traditional BP network and the GA-optimized BP network. Experimental results show that, compared with the traditional BP neural network, the GA-optimized BP neural network converges faster and predicts more accurately.
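    A hedged sketch of the GA-over-BP idea: evolve the weight vector of a tiny one-hidden-layer network by selection, crossover and mutation (a few BP gradient steps would then refine the best individual). All sizes, rates and data are illustrative, not the paper's setup:

```python
import random
import math

random.seed(1)
X = [(0.2,), (0.4,), (0.6,), (0.8,)]
Y = [0.3, 0.5, 0.7, 0.9]                  # toy "price" targets

def predict(w, x):                         # 1-4-1 network, tanh hidden layer
    h = [math.tanh(w[i] * x[0] + w[4 + i]) for i in range(4)]
    return sum(w[8 + i] * h[i] for i in range(4)) + w[12]

def mse(w):
    return sum((predict(w, x) - y) ** 2 for x, y in zip(X, Y)) / len(X)

pop = [[random.uniform(-1, 1) for _ in range(13)] for _ in range(30)]
for gen in range(200):
    pop.sort(key=mse)
    elite = pop[:10]                               # selection
    children = []
    while len(children) < 20:
        a, b = random.sample(elite, 2)
        cut = random.randrange(13)                 # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.3:                  # mutation
            child[random.randrange(13)] += random.gauss(0, 0.2)
        children.append(child)
    pop = elite + children
print(round(mse(min(pop, key=mse)), 4))            # fitness after evolution
```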
  • YANG Quan, PENG Jin-ye
    Computer Engineering. 2014, 40(4): 192-197,202. https://doi.org/10.3969/j.issn.1000-3428.2014.04.037
    To effectively recognize sign language alphabets, this paper presents an algorithm based on Sign Language Visual Words(SLVW). Kinect is used to obtain the video and depth image information of sign language gestures; the principal-axis direction angle and centroid position of the depth image are calculated to adjust the search window, and gesture tracking is performed with depth-image-based DI_CamShift. An Otsu method based on the depth integral image is used for gesture segmentation, and Scale Invariant Feature Transform(SIFT) features are extracted. SLVWs are generated from small regions represented by local feature descriptors; after counting the frequency of visual words in a sign language alphabet image, a Bag of Words(BoW) is built to describe manual alphabets, and a Support Vector Machine(SVM) performs recognition. Experimental results show that the method has high recognition accuracy and good robustness and is unaffected by color, lighting and shadow: the average recognition rate for 30 sign language alphabets in sign language video under complex backgrounds is 96.21%.
  • LIN Chen, LI Hong-yu, NIU Jun-yu
    Computer Engineering. 2014, 40(4): 198-202. https://doi.org/10.3969/j.issn.1000-3428.2014.04.038
    Aiming at the questions of how many basic factors are needed to describe color spectra and what color space structure can be extracted from them, this paper provides an analysis method, based on manifold learning, that differs from traditional linear dimension reduction techniques. Supposing the high-dimensional color spectral data lie on a low-dimensional manifold, the questions of the number of basic factors and of the extracted color space structure become those of the intrinsic dimension and structure of the manifold embedded in the spectral color space. Five intrinsic dimension estimation techniques and six classic manifold learning algorithms are employed to study the Munsell color spectral dataset. Experimental results reveal that a 3-dimensional manifold is embedded in the spectral Munsell color space and that its geometric structure resembles a cone, consistent with the original design of the Munsell color system.
  • ZHOU Hong-zhi, CHENG Xiang-yang
    Computer Engineering. 2014, 40(4): 203-208,213. https://doi.org/10.3969/j.issn.1000-3428.2014.04.039
    Aiming at the shortcomings of local anomaly detection in most video anomaly detection schemes, this paper proposes a video anomaly detection scheme based on local spatio-temporal signatures. Motion descriptors are extracted and quantized into small blocks, and spatio-temporal filters at different scales are applied to obtain smooth estimates of each feature descriptor at each spatio-temporal location. A local K-Nearest Neighbor(KNN) distance is computed at each location for the training and test videos, and these local KNN distances are aggregated into a composite score for each video; the composite scores are ranked to determine anomalies. To test its performance, the scheme is applied to several published datasets, including the UCSD dataset, the UMN crowd anomaly dataset and the Subway dataset. Results show that the proposed scheme outperforms existing video anomaly detection algorithms.
  • DANG Xiao-chao, ZHANG Chun-jiao, HAO Zhan-jun
    Computer Engineering. 2014, 40(4): 209-213. https://doi.org/10.3969/j.issn.1000-3428.2014.04.040
    Considering the fuzziness in the process of information dissemination, a fuzzy algorithm is introduced into the classical Cellular Automata(CA) model. By defining two fuzzy variables, social environment fitness and preference degree, a fuzzy CA model of network public opinion propagation is set up, and the evolution of personal views during opinion formation is simulated and analyzed with the Matlab fuzzy logic toolbox. The results show that after extensive communication and discussion the neutral group gradually grows while those holding extreme attitudes gradually decrease, reaching a compromise, which indicates that the model describes the actual propagation of network public opinion well.
  • XIA Su-na, MA Xiao-hu
    Computer Engineering. 2014, 40(4): 214-217. https://doi.org/10.3969/j.issn.1000-3428.2014.04.041
    To enhance the applicability of Locality Preserving Projections(LPP) to face image super-resolution, this paper proposes an improved method, Correlation Enhanced Locality Preserving Projections(CELPP), which introduces Canonical Correlation Analysis(CCA) into LPP. CELPP is used for feature extraction, and relationship learning builds a bridge between high-resolution and low-resolution images: given a low-resolution input, CELPP feature extraction and mapping transformation yield the high-resolution image used for face recognition. Experimental results on the ORL and Yale databases show that CELPP outperforms LPP and CCA in super-resolution applications, because it considers both the similarity of high-resolution and low-resolution images and the local structure of images of the same class.
  • SHEN Zhen-qian, MIAO Chang-yun, ZHANG Fang
    Computer Engineering. 2014, 40(4): 218-222. https://doi.org/10.3969/j.issn.1000-3428.2014.04.042
    Existing methods either cannot provide the actual queue length or require complex camera calibration, so a vehicle queue length detection algorithm and a rapid camera calibration method referenced to the stop line are proposed. The method sets a parking detection region and uses adjacent-frame differencing to determine whether vehicles are parked behind the stop line, detects vehicles in the queue detection region by image segmentation with automatic thresholding based on the Otsu method, and locates the connected domains containing vehicles and the queue tail from connected-domain parameters. The stop-line-referenced camera calibration method is derived from the pinhole imaging model. Experimental results show that the method accurately detects the pixel positions of the queue head and tail, effectively computes the actual queue length, and meets real-time requirements.
  • WU Shun-yao, SHAO Feng-jing, WANG Jin-long, SUN Ren-cheng, WANG Ying
    Computer Engineering. 2014, 40(4): 223-227. https://doi.org/10.3969/j.issn.1000-3428.2014.04.043
    Fusing attribute-level knowledge in the form of key words can effectively improve document clustering. However, initializing cluster centers from key words remains an open issue. This paper therefore utilizes Wikipedia semantics to identify semantic themes, and adopts a network-based inference strategy with dynamic resource allocation to find hidden semantic relatedness from article collaboration relationships, so as to select the most important documents (initial points) that reflect the semantic themes. Key words are incorporated into document clustering by combining metric learning and soft-constraint strategies. Comparisons with k-means and a key-word-based semi-supervised clustering method on the 20Newsgroup collection demonstrate that key-word-based initialization effectively improves clustering quality; on News_Different_3 in particular, the improvement is about 20% under the Normalized Mutual Information(NMI) index.
  • HE Wen-jian, LI Yan
    Computer Engineering. 2014, 40(4): 228-232. https://doi.org/10.3969/j.issn.1000-3428.2014.04.044
    Gradient-field-based image fusion is a relatively new class of remote sensing image fusion algorithms, but it is only suitable when the scale ratio between the multi-spectral and panchromatic images is below 1:4. To solve the fusion problem caused by the 1:8 scale difference between Beijing-1 satellite multi-spectral and panchromatic images, this paper presents a progressive fusion algorithm based on wavelet transformation and the gradient field. It uses the wavelet transform to narrow the scale difference between the multi-spectral and high-resolution panchromatic images, obtains a preliminary fusion image with a wavelet-based fusion algorithm, and then fuses the preliminary image with the high-resolution image in the gradient field. Experimental results show that the average color difference between the progressive fusion image and the multi-spectral image is 23.5, and the average gradient difference and the multi-scale texture differences between the progressive fusion image and the high-resolution panchromatic image are 2.1 and 3.98, 10.2, 18.9 respectively, better than other mainstream fusion algorithms, indicating that the texture details of the progressive fusion image match the high-resolution panchromatic image much better.
  • ZHAO Hai-feng, LU Yu-miao, LU Ming, CHEN Si-bao
    Computer Engineering. 2014, 40(4): 233-236. https://doi.org/10.3969/j.issn.1000-3428.2014.04.045
    With the rapid growth of digital medical image data, image compression is increasingly needed for storage, yet current image compression methods do not consider the characteristics of medical images. Aiming at this problem, a medical image compression method based on fast sparse representation is proposed. It uses the K-Singular Value Decomposition(K-SVD) algorithm to train an over-complete dictionary for sparse representation and the Batch Orthogonal Matching Pursuit(Batch-OMP) algorithm for sparse coding; only the coefficients at the nonzero positions of the sparse code need to be stored to recover the original medical images with the over-complete dictionary. Experimental results show that the proposed method is about 40% faster than Orthogonal Matching Pursuit(OMP) when compressing images, and image reconstruction results show that it increases the Peak Signal to Noise Ratio(PSNR) of the compressed images by an average of 18% and 50% compared with the Joint Photographic Experts Group(JPEG) and Set Partitioning In Hierarchical Trees(SPIHT) algorithms respectively, indicating better performance than JPEG and SPIHT.
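    The sparse-coding half of the pipeline can be sketched with plain Orthogonal Matching Pursuit against a fixed dictionary; the K-SVD dictionary training and the faster Batch-OMP variant used in the paper are omitted, and all sizes are illustrative:

```python
import numpy as np

def omp(D, y, k):
    """Greedy OMP: pick k atoms of dictionary D that best explain signal y."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x                        # sparse code: store only (support, coef)

rng = np.random.default_rng(0)
D = rng.normal(size=(16, 64))
D /= np.linalg.norm(D, axis=0)      # unit-norm atoms
y = 0.8 * D[:, 3] - 0.5 * D[:, 40]  # signal built from two atoms
x = omp(D, y, k=2)
print(np.nonzero(x)[0], np.round(x[np.nonzero(x)], 2))   # -> [ 3 40] [ 0.8 -0.5]
```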
  • WANG Chang, GU Xing-fa, YU Tao, XIE Yong, XIE Yan-hua, LI Xiao-ying
    Computer Engineering. 2014, 40(4): 237-241,246. https://doi.org/10.3969/j.issn.1000-3428.2014.04.046
    At present, as more and more high-resolution remote sensing satellites are launched worldwide, application demand for high-resolution remote sensing images increases, and image quality directly affects the reliability and accuracy of applications, so restoring high-resolution images is very necessary. According to the high spatial resolution of ZY-3 satellite Charge-coupled Device(CCD) imagery, this paper analyzes on-orbit methods for measuring the system Modulation Transfer Function(MTF) from the image alone and determines that the double-edge method is feasible. It further determines, through experiment, the double-edge linear targets suitable for the method, and derives a construction method for the 2D MTF matrix. With the 2D MTF matrix, the original image is restored by a Modified Inverse Filter(MIF). Test results show a significant improvement in image quality.
  • ZHANG Lun, YU Tao, GU Xing-fa, HU Xin-li, ZHAO Li-min
    Computer Engineering. 2014, 40(4): 242-246. https://doi.org/10.3969/j.issn.1000-3428.2014.04.047
    This paper proposes a new way to study the effect of scale change on image attributes based on ray tracing. A series of multi-resolution images is simulated through feature model selection and scene design. Results for entropy, autocorrelation coefficient and coverage rate indicate that color is unrelated to scale; that distortion occurs when image resolution is reduced excessively, while little changes once resolution improves beyond a certain level; and that coverage rate decreases with resolution within the distortion range. Compared with traditional methods, the proposed method has the advantage of being unaffected by resampling error.
  • ZENG Dong-mei, CHEN Duan-sheng
    Computer Engineering. 2014, 40(4): 247-251,257. https://doi.org/10.3969/j.issn.1000-3428.2014.04.048
    A video image cartoonization algorithm based on the shock filter is proposed to address the low user participation caused by the specialization and complexity of traditional two-dimensional cartoon production. An improved shock filter clusters colors and removes noise from the color video frames, a difference-of-Gaussians operator detects edges in the shock-filtered image, and color quantization is then applied to the filtered image; the edge curves and the quantized image are fused to generate a personalized cartoon image. Experimental results show that, compared with bilateral-filter cartoon stylization, Osher's shock filter, Alvarez's shock filter and an improved Osher shock filter, the algorithm produces cartoon images with stronger visual distinctiveness, higher fidelity, and clearer, more complete, smoother and more continuous edge curves, and it can automatically generate personalized cartoon video by stylizing a series of frames.
  • CUI Yun-xiang
    Computer Engineering. 2014, 40(4): 252-257. https://doi.org/10.3969/j.issn.1000-3428.2014.04.049
    Video annotation uses semantic indexing information to annotate video content, with the purpose of facilitating video retrieval. Existing video annotation work relies on low-level visual features, which are hard to use for directly annotating professional human actions in sports video. To solve this problem, 2D human joint features are used and a professional action knowledge base is built to annotate sports video. The method employs dynamic programming to compare the difference between human actions in two videos and combines a co-training learning approach to realize semi-automatic labeling. Tested on tennis videos, the labeling accuracy reaches 81.4%, an improvement of 30.5% over the existing professional action annotation algorithm.
  • ZHANG Teng, SHI Zheng, LIAO Hai-tao
    Computer Engineering. 2014, 40(4): 258-261,268. https://doi.org/10.3969/j.issn.1000-3428.2014.04.050
    To address yield loss from random defects, a new Multi-project Wafer(MPW) floorplanning algorithm is proposed. By introducing defect models, the algorithm increases production margin and reduces fabrication loss caused by random defects, and the simulated annealing process is modified to escape local optima under the Reticle Size Constraint(RSC). On industrial cases, the algorithm searches the solution space and finds globally optimal solutions by accepting interim floorplanning results; compared with existing algorithms, it increases total chip production margin by 137% and reduces the number of wafers required to obtain sufficient chip volume for each integrated circuit design by 25%.
  • XU Tai-long, XUE Feng, CAI Zhi-kuang, ZHENG Chang-yong
    Computer Engineering. 2014, 40(4): 262-268. https://doi.org/10.3969/j.issn.1000-3428.2014.04.051
    All-digital delay-locked loops play an important role in modern very-large-scale system-on-chips, where they are widely used to solve clock skew and clock generation problems. The conventional all-digital Successive Approximation Register Delay-Locked Loop(SARDLL) suffers from harmonic lock, dead lock, and a lock time longer than the theoretical value. To solve these problems, a wide-range all-digital SARDLL that is free of harmonic lock and dead lock and achieves the theoretical lock time is proposed, by improving the circuit structure of the conventional successive approximation register and adopting a resettable digitally controlled delay line. A 6 bit improved all-digital SARDLL is implemented in the SMIC 0.18 μm CMOS process. Transistor-level post-layout simulation shows a longest lock time of 6 input clock cycles, validating the proposed SARDLL.
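    A behavioural sketch of the successive-approximation search a SARDLL performs on its delay line: each bit is trialled from MSB to LSB and kept or cleared by the phase-detector decision, so a 6 bit loop locks in exactly 6 comparisons. The `phase_late` predicate is a stand-in model, not the paper's circuit.

        # Bit-by-bit SAR search of a digital delay code.
        def sar_lock(phase_late, bits=6):
            """phase_late(code) -> True if the delay-line output lags the reference."""
            code = 0
            for b in reversed(range(bits)):      # trial MSB first
                trial = code | (1 << b)
                if not phase_late(trial):        # delay still short: keep the bit
                    code = trial
            return code                          # locks in exactly `bits` decisions

        # toy phase detector: delay grows with code, target sits at code 37
        print(sar_lock(lambda code: code > 37))  # -> 37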
  • LEI Hai-jun, YANG Zhong-wang
    Computer Engineering. 2014, 40(4): 269-272,276. https://doi.org/10.3969/j.issn.1000-3428.2014.04.052
    High Efficiency Video Coding(HEVC) is the latest video compression standard for high-resolution video content. This paper analyzes the computational complexity of the coding unit and prediction unit algorithms and proposes a fast intra prediction mode decision algorithm. The algorithm exploits the correlation of texture complexity among coding units, sets a rational threshold to select the coding unit size quickly, and reduces the number of candidate modes of the intra prediction algorithm with a three-step search method. Combining the coding unit and prediction unit algorithms, simulation results show that, compared with the HEVC reference software HM8.0, the proposed algorithm reduces encoding time by about 40.9% with negligible loss in bit rate performance.
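    An illustrative sketch of threshold-based coding unit size selection: smooth blocks stop splitting early, so fewer partitions (and fewer candidate modes) are evaluated. The variance measure and the threshold values are assumptions standing in for the paper's texture-complexity test.

        # Recursive CU split decision driven by a texture threshold.
        import numpy as np

        def decide_cu(block, depth=0, max_depth=3, thresholds=(50.0, 30.0, 15.0)):
            """Return a list of (y, x, size) CUs for one square block."""
            size = block.shape[0]
            if depth == max_depth or block.var() < thresholds[min(depth, 2)]:
                return [(0, 0, size)]            # texture simple enough: stop here
            half = size // 2
            cus = []
            for dy in (0, half):
                for dx in (0, half):
                    sub = decide_cu(block[dy:dy + half, dx:dx + half],
                                    depth + 1, max_depth, thresholds)
                    cus += [(dy + y, dx + x, s) for (y, x, s) in sub]
            return cus

        rng = np.random.default_rng(0)
        frame = rng.integers(0, 256, size=(64, 64)).astype(float)
        print(len(decide_cu(frame)))             # noisy block: splits fully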
  • MA Yuan-kun, LIANG Yong-quan, LIU Tong, ZHAO Jian-li, LI Yu-jun
    Computer Engineering. 2014, 40(4): 273-276. https://doi.org/10.3969/j.issn.1000-3428.2014.04.053
    For practical applications of collaborative filtering, this paper proposes a method combining transfer learning and clustering to solve the cold-start problem of a new system. The method uses Spearman rank correlation to measure the similarity between two users and uses the expectation maximization algorithm to cluster the users of the source dataset into several clusters. For each cluster, the N items with the highest average score are selected as that cluster's recommendation list. For each user of the target dataset, the clusters the user belongs to and the degrees of membership are calculated, and the clusters' recommendation lists are recommended in proportion to those memberships. Experimental results show that the algorithm solves the cold-start problem of a new system better than the TAM and CR algorithms.
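    A small sketch of two pieces the abstract names: Spearman rank correlation between users, and a per-cluster top-N list built from average observed ratings. The toy rating matrix, the value of N, and the two-user "cluster" are assumptions; the EM clustering step is not reproduced.

        # Spearman user similarity + top-N list for one cluster.
        import numpy as np
        from scipy.stats import spearmanr

        ratings = np.array([[5, 3, 0, 1],        # rows: users, cols: items, 0 = unrated
                            [4, 0, 0, 1],
                            [1, 1, 0, 5],
                            [1, 0, 0, 4]], dtype=float)

        sim, _ = spearmanr(ratings[0], ratings[1])    # user-user similarity

        def top_n(cluster_rows, n=2):
            """Items with the highest average observed rating in one cluster."""
            mask = cluster_rows > 0
            sums = (cluster_rows * mask).sum(axis=0)
            counts = np.maximum(mask.sum(axis=0), 1)
            avg = np.where(mask.any(axis=0), sums / counts, -1.0)
            return np.argsort(avg)[::-1][:n]

        print(round(sim, 3), top_n(ratings[:2]))      # list for a toy cluster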
  • WANG Yi-qing, CHEN Shu-qiao, MA Hai-long
    Computer Engineering. 2014, 40(4): 277-280,286. https://doi.org/10.3969/j.issn.1000-3428.2014.04.054
    Current research on flow statistics focuses primarily on flow sampling and ignores full-flow statistics. A Counting Bloom Filter(CBF) optimized for flow statistics is proposed. To cope with counter overflow and the growth of false positives as data increase, a dynamic statistics scheme and a coordinated multi-counter statistics scheme are proposed. The summary storage structure is easy to query, and the CBF data structure is easy to implement in hardware. Experimental results show that the time complexity of the CBF used for flow statistics is lower than that of the traditional hash method, so it can be used for fast full-flow statistics in network applications.
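    A minimal counting Bloom filter sketch of the structure the abstract builds on: k counters per key, incremented on insert, with the minimum counter as a lower-bound count estimate. Hash choice, sizing, and the saturation guard are illustrative; the paper's dynamic and multi-counter schemes are not reproduced.

        # Counting Bloom filter with saturating counters.
        import hashlib

        class CountingBloomFilter:
            def __init__(self, m=1024, k=4, width=4):
                self.m, self.k = m, k
                self.max = (1 << width) - 1      # counter saturates, never wraps
                self.counters = [0] * m

            def _idx(self, key):
                for i in range(self.k):
                    h = hashlib.md5(f"{i}:{key}".encode()).hexdigest()
                    yield int(h, 16) % self.m

            def add(self, key):
                for i in self._idx(key):
                    if self.counters[i] < self.max:
                        self.counters[i] += 1

            def count(self, key):
                """Estimated insert count (may overestimate, never under)."""
                return min(self.counters[i] for i in self._idx(key))

        cbf = CountingBloomFilter()
        for _ in range(3):
            cbf.add("10.0.0.1->10.0.0.2")
        print(cbf.count("10.0.0.1->10.0.0.2"))  # 3 (or an overestimate)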
  • FENG Qiang, HU Yi, YU Dong, LU Xiao-hu
    Computer Engineering. 2014, 40(4): 281-286. https://doi.org/10.3969/j.issn.1000-3428.2014.04.055
    To meet the application needs of embedded multi-core, high-speed, high-precision numerical control systems, and in view of the high latency and low communication capacity of current multi-core communication, this paper studies the design of an embedded numerical control system based on an ARM and DSP dual-core architecture, and designs and implements a multi-core data communication mechanism on this platform. The mechanism is based on shared memory and covers hardware driver implementation, memory partitioning, communication synchronization, and the establishment of a shared cache pool and communication protocol. The dual-core data transmission latency and capacity, which affect system performance, are measured, and an application test is carried out. The results prove that the design meets the performance requirements of 2 MB communication volume and 20 ms communication delay for the embedded ARM and DSP dual-core numerical control system.
  • CHEN Xi, JIA Ke-bin, WANG Si-wen
    Computer Engineering. 2014, 40(4): 287-290,294. https://doi.org/10.3969/j.issn.1000-3428.2014.04.056
    To improve the precision of shot boundary detection, a new algorithm based on mutual information is proposed. The algorithm adopts mutual information, calculated from non-uniform block histograms in HSV space, as the inter-frame difference measure. Combined with a corresponding threshold strategy and a time-window strategy, cut shots, gradual transitions, and shot boundaries produced by computer effects can all be detected successfully. Experiments on many kinds of video, including advertising, variety shows, and news, show that, compared with the original twin-threshold method, the proposed algorithm improves detection by 11.9% on cut shots and 7.6% on gradual shots. The algorithm is robust to camera movement and lighting changes, with higher recall and precision.
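    A sketch of frame-to-frame mutual information from a joint histogram, the difference measure the abstract uses; here it is computed on grayscale intensities rather than the paper's non-uniform HSV blocks, which is a simplification.

        # Mutual information between two frames; low MI suggests a cut.
        import numpy as np

        def mutual_information(f1, f2, bins=32):
            """MI (in bits) between the intensities of two frames."""
            joint, _, _ = np.histogram2d(f1.ravel(), f2.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

        rng = np.random.default_rng(0)
        a = rng.integers(0, 256, size=(120, 160))
        b = rng.integers(0, 256, size=(120, 160))
        print(mutual_information(a, a), mutual_information(a, b))  # high vs. near 0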
  • MAO Wei-min
    Computer Engineering. 2014, 40(4): 291-294. https://doi.org/10.3969/j.issn.1000-3428.2014.04.057
    This paper proposes a novel nonlinear feedback controller to stabilize complex power networks with stochastic perturbations, which can otherwise cause large-scale blackouts and great loss to the national economy. The robust control of power networks is studied with the theory of complex networks, and a nonlinear feedback controller based on the sign function and the absolute value function is designed. Using Lyapunov stability theory, the robust stability of the complex power network with stochastic perturbations under this controller is verified. In the simulations, the node dynamics follow the Lorenz system and the network structure is scale-free. Numerical simulations show the effectiveness of the theoretical results: robust synchronization is effectively achieved by the nonlinear controller.
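    A toy single-node sketch of a sign/absolute-value feedback law driving a perturbed Lorenz trajectory toward a reference one; the gains, noise level, and Euler integration are ad hoc assumptions, and the paper's network coupling and Lyapunov analysis are not reproduced.

        # Sign + absolute-value feedback on a noisy Lorenz node.
        import numpy as np

        def lorenz(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            return np.array([sigma * (x[1] - x[0]),
                             x[0] * (rho - x[2]) - x[1],
                             x[0] * x[1] - beta * x[2]])

        def control(e, k1=5.0, k2=50.0):
            """Feedback from sign and absolute value; note |e|*sign(e) == e."""
            return -k1 * np.sign(e) - k2 * np.abs(e) * np.sign(e)

        dt = 1e-3
        x = np.array([1.0, 1.0, 1.0])            # perturbed node
        x_ref = np.array([0.5, 0.5, 0.5])        # reference trajectory
        rng = np.random.default_rng(0)
        for _ in range(20000):
            noise = rng.normal(0.0, 0.1, 3)      # stochastic perturbation
            x = x + dt * (lorenz(x) + control(x - x_ref) + noise)
            x_ref = x_ref + dt * lorenz(x_ref)
        print(np.linalg.norm(x - x_ref))         # small residual: synchronized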
  • SUN Gong-jin, AN Hong, FAN Dong-rui
    Computer Engineering. 2014, 40(4): 295-300,304. https://doi.org/10.3969/j.issn.1000-3428.2014.04.058
    Motion Estimation(ME) is the most complex and time-consuming stage of video coding. This paper extracts the ME modules from multiple popular open-source video codecs in order to evaluate and optimize their performance, and constructs a comprehensive input data set for these ME algorithms covering different video contents and resolutions. A quantitative analysis of runtime efficiency and microarchitectural characteristics is made with a profiling tool based on hardware performance counters, exposing the algorithms' performance differences on current mainstream processor architectures. The evaluation shows that ME consumes the most time on complex, high-resolution input video, and that the algorithms differ little in their uniformly low Instruction Level Parallelism(ILP), while their Last Level Cache(LLC) miss rates and branch misprediction rates are all rather low, below 0.01% and 7% respectively.
  • BAO Hong-ping, ZHU Xiao-dong, ZHU Jian-liang, WANG Wei, ZHANG Jie
    Computer Engineering. 2014, 40(4): 301-304. https://doi.org/10.3969/j.issn.1000-3428.2014.04.059
    To address magnetic compass error compensation under hard- and soft-iron interference, this paper proposes a magnetic sensor error compensation algorithm based on ellipse rotation. It analyzes the causes of magnetic compass error, builds the rotated-ellipse model, derives the error compensation parameter formulas with a nonlinear least-squares fitting algorithm, and verifies the compensation algorithm with measurements from a Honeywell biaxial magnetoresistive sensor. Experimental results show that, compared with the traditional ellipse error compensation model, the proposed rotated-ellipse compensation model reduces the maximum heading angle error from 2.0° to 0.4°.
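    A sketch of the fitting step: a rotated ellipse is fit to raw magnetometer (x, y) samples by nonlinear least squares, yielding the centre, semi-axes, and rotation needed for compensation. The parameterization and the synthetic data (offset, scaling, rotation) are assumptions, not the paper's derivation.

        # Nonlinear least-squares fit of a rotated ellipse.
        import numpy as np
        from scipy.optimize import least_squares

        def residuals(p, x, y):
            """Zero when (x, y) lies on the rotated ellipse given by p."""
            x0, y0, a, b, theta = p
            c, s = np.cos(theta), np.sin(theta)
            u = ((x - x0) * c + (y - y0) * s) / a
            v = (-(x - x0) * s + (y - y0) * c) / b
            return u ** 2 + v ** 2 - 1.0

        # synthetic readings: hard-iron offset, soft-iron axis scaling, rotation
        t = np.linspace(0.0, 2.0 * np.pi, 200)
        x = 3.0 + 1.5 * np.cos(t) * np.cos(0.3) - 0.8 * np.sin(t) * np.sin(0.3)
        y = -1.0 + 1.5 * np.cos(t) * np.sin(0.3) + 0.8 * np.sin(t) * np.cos(0.3)

        fit = least_squares(residuals, x0=[0.0, 0.0, 1.0, 1.0, 0.0], args=(x, y))
        print(fit.x)   # centre, semi-axes, rotation (up to equivalent forms)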
  • LI Jin-wen, AN Bo-wen
    Computer Engineering. 2014, 40(4): 305-308. https://doi.org/10.3969/j.issn.1000-3428.2014.04.060
    Massive data transmission is one of the major factors restricting the efficiency of Digital Signal Processor(DSP) systems. An optimized fiber-center locating algorithm based on the DM642 is proposed to improve the efficiency of data transmission when locating fiber centers at the output end of an image-carrying fiber bundle imaging system. In the memory configuration phase, the L2 cache/SRAM mode is set and off-chip data are prefetched into on-chip SRAM; in the data transmission phase, dynamic offsets of the data storage are designed to adapt to iteration and reduce repeated reads. Experimental results show that the method locates the fiber centers while reducing time consumption by 1/4, improving the efficiency of the system.
  • GAO Hong-feng, SHAO Hong-xiang, HU Jun-hong
    Computer Engineering. 2014, 40(4): 309-312. https://doi.org/10.3969/j.issn.1000-3428.2014.04.061
    The Belief Propagation(BP) algorithm for LT codes not only has high complexity but also suffers an oscillation effect due to short loops in the Tanner graph. To solve these two problems, a new oscillating iteration algorithm based on the soft-bit domain is proposed. The hyperbolic tangent function is transformed and quantized to the soft-bit domain (-1, 1), and the information update of the variable nodes is transferred to the soft-bit domain. In LT codes, some extrinsic information of variable nodes oscillates because of short loops, which badly degrades decoding performance. A new criterion is presented to detect the oscillation: when the sign of a variable node flips between two adjacent iterations and the soft-bit values are higher than a given threshold, the oscillation effect exists. Simulation results show that the proposed algorithm reduces the amount of computation by 75% compared with the BP algorithm, while the bit error rate performance of the soft-bit domain decoding is very close to that of traditional BP.
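    A sketch of the soft-bit mapping involved: LLRs are taken to s = tanh(L/2) in (-1, 1), where the BP check-node update collapses to a plain product, and oscillation is flagged by a sign flip at high magnitude. The threshold value and the unquantized arithmetic are illustrative assumptions.

        # Soft-bit domain helpers: mapping, check-node product, flip test.
        import numpy as np

        def to_soft_bit(llr):
            """Map a log-likelihood ratio into the soft-bit domain (-1, 1)."""
            return np.tanh(llr / 2.0)

        def check_node(soft_bits):
            """Check-node update is a product in the soft-bit domain."""
            return np.prod(soft_bits)

        def oscillating(s_prev, s_now, thresh=0.5):
            """Sign flip between iterations at high magnitude flags oscillation."""
            return (np.sign(s_prev) != np.sign(s_now)
                    and min(abs(s_prev), abs(s_now)) > thresh)

        print(check_node(to_soft_bit(np.array([1.2, -0.7, 2.0]))))
        print(oscillating(0.8, -0.7), oscillating(0.1, -0.05))  # True False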
  • YANG Fang-ping
    Computer Engineering. 2014, 40(4): 313-317. https://doi.org/10.3969/j.issn.1000-3428.2014.04.062
    Synchronization is the basis of research on and application of multi-channel redundancy. Based on multi-channel static scheduling, this paper presents a dynamic synchronization time control algorithm that grants more time to the faulty unit so that interference is eliminated as far as possible. Furthermore, a dynamic synchronization voting unit is proposed, with which the time is controlled and the voting data are monitored. Theoretical analysis and experimental results show that the dynamic synchronization voting algorithm extends the task time window of the redundancy system, strengthens its tolerance of transient faults, and improves its reliability.
  • WANG Zhen, LI Ren-fa, LI Yan-biao, TIAN Zheng
    Computer Engineering. 2014, 40(4): 318-320. https://doi.org/10.3969/j.issn.1000-3428.2014.04.063
    To improve the accuracy of mixed Chinese/English text matching and the low efficiency of large-scale text matching, a parallel multiple-pattern matching algorithm for mixed Chinese/English texts is proposed based on the classical Threaded Hash Trie(THT) tree algorithm. The program splits the text into a number of small texts and runs the THT algorithm to match each of them, further accelerated by parallelizing the preprocessing and by a new storage structure. Experimental results indicate that the method is correct and more effective than the classical algorithm, achieving a speedup of more than 8 when the data scale reaches 2^26.
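    A sketch of the split-and-match idea: the text is cut into chunks that overlap by (longest pattern - 1) characters so no match is lost at a boundary, and each chunk is scanned in a separate process. Plain substring search stands in for the THT automaton here, and the chunk count is an arbitrary choice.

        # Overlap-safe parallel multi-pattern matching over text chunks.
        from concurrent.futures import ProcessPoolExecutor

        def find_all(offset, chunk, patterns):
            """All (position, pattern) hits of `patterns` inside one chunk."""
            hits = []
            for p in patterns:
                start = 0
                while (i := chunk.find(p, start)) != -1:
                    hits.append((offset + i, p))
                    start = i + 1
            return hits

        def parallel_match(text, patterns, chunks=4):
            overlap = max(map(len, patterns)) - 1    # no boundary match lost
            size = len(text) // chunks + 1
            jobs = [(i, text[i:i + size + overlap], patterns)
                    for i in range(0, len(text), size)]
            with ProcessPoolExecutor() as pool:
                results = pool.map(find_all, *zip(*jobs))
            return sorted(set(h for r in results for h in r))  # dedupe overlaps

        if __name__ == "__main__":
            print(parallel_match("abc中文abc中文abc", ["abc", "中文"]))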