
15 March 2014, Volume 40 Issue 3
    

  • WANG Yun-feng, ZHANG Bin
    Computer Engineering. 2014, 40(3): 1-5. https://doi.org/10.3969/j.issn.1000-3428.2014.03.001

    In a Radio Frequency Identification(RFID) system, data collisions occur when multiple tags within the reader's range respond during the identification process. Based on an analysis of the ALOHA algorithm and the application of Code Division Multiple Access(CDMA) technology, an anti-collision ALOHA algorithm Based on Spread Spectrum(ABSS) is proposed. The throughput of the algorithm decreases as the frame transmission delay increases and increases with the number of spreading codes. When the number of spreading codes equals the load, the system throughput reaches its minimum; when the number of spreading codes exceeds the load, throughput increases with the number of spreading codes, so the system throughput is higher than that of slotted ALOHA. A CDMA-based communication model of multiple tags and a reader is established in Simulink to study the influence of the signal-to-noise ratio, the uplink rate and the frame size on the communication error rate. Experimental data show that communication between the reader and tags is reliable, and in practice the error rate is close to zero.
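
The collision model above can be illustrated with a toy Monte Carlo sketch (not the paper's ABSS implementation): each tag picks a random slot and a random spreading code, and a tag is read successfully only when no other tag picks the same (slot, code) pair, so adding spreading codes raises throughput just as the abstract describes. All parameter values are illustrative.

```python
import random

def simulate_throughput(num_tags, num_slots, num_codes, trials=500):
    """Monte Carlo estimate of successful reads per frame when each tag
    picks a random (slot, spreading code) pair; a tag succeeds iff no
    other tag chose the same pair (CDMA separates same-slot tags that
    use different codes)."""
    total = 0
    for _ in range(trials):
        picks = {}
        for _ in range(num_tags):
            key = (random.randrange(num_slots), random.randrange(num_codes))
            picks[key] = picks.get(key, 0) + 1
        total += sum(1 for c in picks.values() if c == 1)
    return total / trials
```

With a fixed number of slots, raising the number of spreading codes reduces collisions, so the estimated throughput rises.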

  • ZHU Qing
    Computer Engineering. 2014, 40(3): 6-11. https://doi.org/10.3969/j.issn.1000-3428.2014.03.002

    Since sensor nodes are prone to software or mechanical failure, some or all of their data may be permanently lost. To ensure reliable data collection in Wireless Sensor Network(WSN), this paper proposes an improved online data recovery scheme. Node data are stored redundantly using a distributed storage mechanism, and, driven by the recovery requests sent by the sensors, a polynomial-time data recovery algorithm with a constant-approximation competitive ratio is proposed. The algorithm keeps a low competitive ratio when admission control is allowed and the size of the recovery pieces is constrained, and achieves a constant approximation bound in acyclic networks. Analysis and simulation results show that the proposed algorithm recovers network data exactly and attains the minimum recovery cost for any recovery request.

  • SHI Da-wei, YUAN Tian-wei
    Computer Engineering. 2014, 40(3): 12-17,22. https://doi.org/10.3969/j.issn.1000-3428.2014.03.003

    Combining coarse-grained and fine-grained Dynamic Taint Analysis(DTA) takes both speed and accuracy into account. By comparing the realization processes of coarse-grained and fine-grained DTA, this paper proposes a new analysis framework: online coarse-grained DTA filters the useful instructions, and offline fine-grained DTA computes the taint information. Coarse-grained and fine-grained taint marking methods are established by comparing the differences between the two analyses, and data-flow and control-flow propagation strategies are developed for each granularity. An offline trace record structure is designed as the transfer file that provides the necessary information for fine-grained analysis. A prototype system is implemented, and experimental results show that the method collects taint information rapidly in the online coarse-grained mode and uses the offline fine-grained mode to improve accuracy with acceptable time consumption.

  • ZHENG Zhao-xia, LI Yi-fan, YU Liang, TIAN Yuan, LIU Zheng-lin
    Computer Engineering. 2014, 40(3): 18-22. https://doi.org/10.3969/j.issn.1000-3428.2014.03.004

    Trojan circuits can bypass traditional defensive techniques because they occupy a layer below the entire software stack. This paper proposes a hardware Trojan detection technique based on probabilistic signatures. Building on logic testing and using randomized hypothesis testing, it constructs the probabilistic signature of a circuit(Boolean function) as its unique identification template; when the signature of the circuit under test does not match the template, an alarm is raised. Two circuits implementing a full adder and AES encryption are designed and implanted with common hardware Trojans. Theoretical analysis examines whether the probabilistic signatures of the Trojan-implanted circuits change with respect to the two original circuits, and the circuits are tested probabilistically on an FPGA platform. The results verify that, based on the probabilistic signature, a 95% confidence level can easily be achieved in detecting hardware Trojans implanted into combinational logic circuits.
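
As a toy illustration of the idea (not the paper's construction), a circuit's probabilistic signature can be taken as its output probability under uniformly random inputs; a Trojan that flips the output on a rare trigger shifts that probability. The full-adder example and the XOR-payload Trojan below are hypothetical stand-ins.

```python
import random

def probabilistic_signature(circuit, n_inputs, samples=5000, seed=42):
    """Estimate P(output = 1) under uniformly random inputs; this single
    probability serves as a toy 'signature' of the Boolean function."""
    rng = random.Random(seed)
    ones = 0
    for _ in range(samples):
        ones += circuit([rng.randint(0, 1) for _ in range(n_inputs)])
    return ones / samples

def full_adder_sum(x):
    a, b, cin = x
    return a ^ b ^ cin                    # golden circuit: P(output=1) = 1/2

def trojaned_sum(x):
    a, b, cin = x
    return (a ^ b ^ cin) ^ (a & b & cin)  # payload flips output when a=b=cin=1
```

The Trojan changes the output on one of eight input patterns, so the estimated signature shifts from about 0.5 to about 0.375 and the mismatch is detectable.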

  • WANG Na, WEI Bo, WANG Jin-dong, ZHANG Heng-wei
    Computer Engineering. 2014, 40(3): 23-27,38. https://doi.org/10.3969/j.issn.1000-3428.2014.03.005

    With the explosive growth of the number of services in cloud computing environments, selecting the services that meet a user's requirements from services with the same or similar functions becomes a key problem. A multi-objective service composition optimization model with Quality of Service(QoS) constraints is therefore built, and, since the traditional Multi-objective Particle Swarm Optimization(MOPSO) algorithm suffers from low solution diversity and easily falls into local extrema, a Chaotic MOPSO(CMOPSO) algorithm is proposed. The algorithm uses information entropy theory to maintain the non-dominated solution set, preserving the diversity of solutions and the uniformity of their distribution. When the diversity of the population disappears, a chaotic disturbance mechanism is introduced to restore population diversity and improve the global optimization ability, avoiding local extrema. Experimental results show that the convergence and solution-set diversity of the CMOPSO algorithm are better than those of the traditional MOPSO algorithm, and that it solves dynamic service selection in cloud computing environments more efficiently.

  • MO Zhao, WEI Yong-zhuang
    Computer Engineering. 2014, 40(3): 28-32,45. https://doi.org/10.3969/j.issn.1000-3428.2014.03.006

    LBlock is a recently proposed lightweight block cipher. Using both the structure of the LBlock algorithm and the basic idea of the cube test, two neutral-bit detection algorithms are proposed. It is shown that all master key bits are involved in each output bit of 9-round reduced LBlock. Moreover, for a given choice of 18 cube variables, no key neutral bits exist for 11-round reduced LBlock. The results show that the full-round LBlock cipher provides good key-bit confusion against classical cube attacks.

  • LUO Jun, CHEN Xi-lin, LI Wen-sheng
    Computer Engineering. 2014, 40(3): 33-38. https://doi.org/10.3969/j.issn.1000-3428.2014.03.007
    Most traditional caching systems are based on in-memory storage to achieve high performance, so their data persistence is imperfect: they are limited by memory capacity, and they may lose all data irrecoverably when the system crashes. After analyzing traditional caching systems, this paper applies the Log Structured Merge-Tree(LSM-Tree) theory and a Merge-Dump storage engine to improve data persistence, and implements a distributed persistent Key-Value caching system, Sorted Set DB(SSDB), by referencing Google's stand-alone persistent storage system LevelDB. SSDB combines the advantages of traditional caching systems with consistent Hashing, Bloom filters and other techniques to optimize its performance. Performance tests show that, compared with pure in-memory storage, SSDB effectively reduces the cost of data storage, with only a slight decrease of 600 Query Per Second(QPS) in read/write performance compared with Redis.
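
Consistent hashing, one of the techniques the abstract mentions, can be sketched as follows; the node names and virtual-node count are illustrative, not SSDB's actual implementation.

```python
import hashlib
from bisect import bisect

class ConsistentHashRing:
    """Minimal consistent-hash ring: a key maps to the first node point
    clockwise from its hash, so adding or removing a node only remaps
    the keys that were owned by that node."""
    def __init__(self, nodes, vnodes=64):
        # Each node contributes several virtual points for balance.
        self.ring = sorted(
            (self._h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes)
        )
        self.points = [p for p, _ in self.ring]

    @staticmethod
    def _h(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, key):
        i = bisect(self.points, self._h(key)) % len(self.ring)
        return self.ring[i][1]

ring = ConsistentHashRing(["ssdb-a", "ssdb-b", "ssdb-c"])
```

Removing one node leaves every key owned by the other nodes on its original node, which is why caching layers favor this scheme over plain modulo hashing.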
  • GAO Jun, HAO Zhong-xiao
    Computer Engineering. 2014, 40(3): 39-45. https://doi.org/10.3969/j.issn.1000-3428.2014.03.008
    Aiming at nearest neighbor queries for uncertain objects with range constraints in a network, the concept of the nearest neighbor query for fuzzy objects in a constrained network is put forward. According to the query sequence, two query algorithms, NN-R and R-NN, are proposed. For storage, both algorithms use an index structure that separates spatial location information from network connectivity information, and they use clustered file organization to reduce I/O operations. The NN-R algorithm reduces the search area using the minimum and maximum α-distance between a fuzzy object and the range constraint. The R-NN algorithm uses Euclidean nearest neighbors as candidates, reducing complexity by exploiting the facts that the Euclidean distance is easy to compute and is a lower bound of the network distance. The complexities of the algorithms are O(logm1|E|+(|V*|m3+1)logm2|V|+|E|+|V|log|V|+n(lg n+1)) and O(logm4n+(k+1)logm1|E|+|E|+|V|log|V|), respectively. Experimental results show that each algorithm performs better within its own applicable area.
  • LI Xu, CAO Lei, FU Lei
    Computer Engineering. 2014, 40(3): 46-50. https://doi.org/10.3969/j.issn.1000-3428.2014.03.009
    Aiming at the problem that search results are too large and complex in large-scale social network applications, a method combining personalized recommendation with visualization is proposed to find useful information in mass data. This paper proposes a key-information display algorithm based on the Pathfinder Networks(PFNET) algorithm, which emphasizes important records with a high degree of association. Additionally, an improved weighted force-directed algorithm is applied to cluster user relations and improve the display layout, facilitating users' cognition and achieving personalized recommendation. A personalized recommendation visualization tool, HRVis, is designed and implemented. Experiments on the MovieLens dataset show that HRVis can emphasize important users with good social relations and associated users similar to them, and that it achieves a good visual recommendation effect.
  • ZHANG Shu-yun, ZHANG Shou-zhi
    Computer Engineering. 2014, 40(3): 51-54. https://doi.org/10.3969/j.issn.1000-3428.2014.03.010
    For uncertain data, the traditional method of judging whether an itemset is frequent cannot express how close the estimate is, and the frequent itemsets mined from large datasets are numerous and redundant. To address these two disadvantages, this paper proposes UFCIM, a mining algorithm for frequent closed itemsets over uncertain data based on the level-wise mining algorithm Apriori. It uses a confidence probability to express how close the estimate is: the larger the confidence probability, the more likely the itemset is frequent. Because frequent closed itemsets are a compact and lossless representation of frequent itemsets, the algorithm uses the compact frequent closed itemsets in place of the huge set of frequent itemsets. Experimental results show that UFCIM mines frequent closed itemsets effectively and quickly, reducing redundancy while preserving the accuracy and completeness of the itemsets.
  • YIN Shao-hong, FAN Gui-dan
    Computer Engineering. 2014, 40(3): 55-58,75. https://doi.org/10.3969/j.issn.1000-3428.2014.03.011
    Past algorithms produce large amounts of redundant itemsets, which affects the efficiency of data mining. Therefore, a matrix-based Top-k frequent itemsets mining algorithm over data streams is proposed. Two 0-1 matrices, a transaction matrix and a 2-itemsets matrix, are introduced. The transaction matrix represents the transaction list of a sliding window, and the 2-itemsets matrix is obtained by calculating the support of each row. Candidate itemsets are then derived from the 2-itemsets matrix, and the Top-k frequent itemsets are obtained by computing the support of the candidates through a logical AND over the corresponding rows of the transaction matrix. Finally, the mining results are saved in a data dictionary, from which the algorithm can output the Top-k frequent itemsets in descending order of support when a user queries. Experimental results show that the algorithm avoids redundant itemsets during mining and improves mining efficiency appreciably without sacrificing accuracy.
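
The bit-matrix idea can be sketched in a few lines (an illustrative stand-in, not the paper's sliding-window algorithm): encode each item's column of the 0-1 transaction matrix as a bitmask, then count 2-itemset support with a bitwise AND.

```python
def two_itemset_supports(transactions, items):
    """Build a 0-1 transaction matrix, one bitmask per item (bit i set iff
    transaction i contains the item), and count 2-itemset support with a
    bitwise AND over the two bitmasks."""
    cols = {it: 0 for it in items}
    for row, t in enumerate(transactions):
        for it in t:
            if it in cols:
                cols[it] |= 1 << row          # set this transaction's bit
    supports = {}
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            supports[(a, b)] = bin(cols[a] & cols[b]).count("1")
    return supports

txns = [{"a", "b", "c"}, {"a", "b"}, {"b", "c"}, {"a", "c"}]
sup = two_itemset_supports(txns, ["a", "b", "c"])
```

One AND plus a popcount per pair replaces a scan over the transaction list, which is the efficiency argument behind the matrix representation.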
  • SUN Li, HE Gang, LI Ji-yun
    Computer Engineering. 2014, 40(3): 59-62,81. https://doi.org/10.3969/j.issn.1000-3428.2014.03.012
    Since traditional Extract, Transform, Load(ETL) tools face efficiency problems with the massive fact data in a data warehouse, two parallel fact-processing algorithms are designed and implemented on the Hadoop platform. From the two perspectives of surrogate key lookup in fact tables and aggregation of fact data at different granularities, a multi-way parallel lookup algorithm over slowly changing dimensions and an aggregation algorithm for fact data at different granularities are presented. The first algorithm handles slowly changing dimensions and big dimensions together: to avoid running out of memory, it uses the distributed cache to copy small dimensions to every data node's memory, and it performs the multi-way lookup of dimension keys in the map stage to avoid the network delay caused by data transmission. The second algorithm adds a merge stage after the reduce stage, which solves the aggregation of fact data at different granularities effectively. Experimental results show that the two algorithms are more efficient than the Hive data warehouse for parallel processing of fact data.
  • BAO Lin, NIU Jun-yu, ZHUANG Fang
    Computer Engineering. 2014, 40(3): 63-66,87. https://doi.org/10.3969/j.issn.1000-3428.2014.03.013
    For the problem that recommendation systems are vulnerable to spammer attacks, which distort the final item ratings, this paper proposes a median-based user reputation ranking algorithm. The algorithm readjusts the weight of each user's ratings by measuring the user's reputation. Since the median is less susceptible to extreme ratings, the algorithm uses the median of the distances between a user's ranking and the object ranking as the criterion for decreasing user reputation, then iterates until convergence to adjust user reputations and the final ratings. Results on multiple real datasets show that the algorithm obtains a more reasonable reputation distribution and higher accuracy, and that rating data preprocessed by the algorithm achieve a better Root Mean Square Error(RMSE) on SVD++.
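
A minimal sketch of a median-based reputation iteration (the update rule and decay function here are illustrative assumptions, not the paper's exact formulas): object estimates are reputation-weighted means, and each user's reputation decays with the median distance between their ratings and the current estimates.

```python
from statistics import median

def reputation_ranking(ratings, iters=20):
    """ratings[u][o] = user u's rating of object o. Alternate between
    estimating each object as the reputation-weighted mean of its ratings
    and decaying each user's reputation with the MEDIAN absolute distance
    to the estimates (median, not mean, to resist extreme spam ratings)."""
    users = list(ratings)
    objects = {o for r in ratings.values() for o in r}
    rep = {u: 1.0 for u in users}
    for _ in range(iters):
        est = {
            o: sum(rep[u] * ratings[u][o] for u in users if o in ratings[u])
               / sum(rep[u] for u in users if o in ratings[u])
            for o in objects
        }
        for u in users:
            d = median(abs(ratings[u][o] - est[o]) for o in ratings[u])
            rep[u] = 1.0 / (1.0 + d)   # larger distance -> lower reputation
    return rep, est

ratings = {
    "honest1": {"i1": 4, "i2": 2}, "honest2": {"i1": 4, "i2": 2},
    "spammer": {"i1": 1, "i2": 5},
}
rep, est = reputation_ranking(ratings)
```

On this toy input the spammer's ratings sit far from the consensus, so their reputation, and hence their weight in the final item ratings, shrinks with each iteration.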
  • YAN Yi-ming, GUO Xin
    Computer Engineering. 2014, 40(3): 67-70,92. https://doi.org/10.3969/j.issn.1000-3428.2014.03.014
    To deal with data mining tasks in real environments that involve large data volumes, complex processing and intensive computation, to improve the efficiency of traditional tree incremental-update mining, and to replace existing serial implementations, this paper proposes a dynamic tree incremental updating method based on Hadoop. It introduces the relevant cloud computing concepts, the cloud model and the operating process. Then, in view of the random distribution strategy of Hadoop task scheduling, a new dynamic resource allocation algorithm for cloud platforms is put forward to minimize the consumption cost. A new tree incremental updating algorithm on the cloud platform is designed, and two parallel algorithms(DeleteFreqTree, FindNewTree) are proposed. Extensive experiments show that the parallelized algorithm is feasible, highly efficient and scalable, and that it can mine massive tree data effectively.
  • HU Ping, WANG Zhong-qun, LIU Tao, CHEN Ying, HUANG Shao-wei
    Computer Engineering. 2014, 40(3): 71-75. https://doi.org/10.3969/j.issn.1000-3428.2014.03.015
    Power companies need to improve data sharing and interoperation between heterogeneous applications in the smart grid. Existing solutions based on data model standardization or SOA are inadequate in terms of runtime module hot-plugging, the invasiveness of the distributed programming model and support for continuously changing data. From the perspective of electric data and software architecture, this paper decouples power application software into a data bus and data plugs, proposes a general electric data platform based on distributed Open Service Gateway initiative(OSGi), and discusses the topological structure of the data platform, the distributed OSGi extension model and the electric metadata model. The implementation of the data platform in the Fujian power grid is presented, and the functionality and concurrent performance of typical modules are tested. The results show that the platform effectively reduces the difficulty of data sharing and interoperation between heterogeneous applications.
  • ZHANG Ying-nan, GU Nai-jie, PENG Jian-zhang, WANG Guo-peng, WEI Zhen-wei
    Computer Engineering. 2014, 40(3): 76-81. https://doi.org/10.3969/j.issn.1000-3428.2014.03.016
    This paper proposes an efficient kernel-level method to maintain sessions in multi-process load balancing. In the wakeup-callback mechanism of epoll, the method selectively wakes up a particular service process to accept a connection and process its packets, according to a source-address hash. Requests sharing the same IP address are thus served by the same service process, so requests from the same client are forwarded to the same backend server according to the session information kept in that load balancing process. The paper also optimizes the packet receiving and sending path by deliberately keeping the whole process on the same CPU core to reduce CPU cache refreshes. Experimental results on multi-queue NICs and multi-core processors show that the method keeps sessions correctly while increasing connections per second(cps) by 12% and throughput by 4.6%.
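
The source-address hash that picks the worker can be sketched in user space (a toy stand-in for the in-kernel wakeup selection; the bit-mixing constant is an illustrative choice, not from the paper):

```python
import socket
import struct

def pick_worker(src_ip, n_workers):
    """Map a client IP to one of n service processes so that every request
    from the same address wakes the same worker, which can then reuse its
    local session table."""
    packed = struct.unpack("!I", socket.inet_aton(src_ip))[0]
    # Mix the bits so that adjacent addresses spread across workers.
    packed = (packed ^ (packed >> 16)) * 0x45d9f3b & 0xFFFFFFFF
    return packed % n_workers
```

Because the mapping is a pure function of the source address, no shared session store is needed: the same client always lands on the same process.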
  • WANG En-dong, LIANG Zhi-cheng, ZHANG Yu
    Computer Engineering. 2014, 40(3): 82-87. https://doi.org/10.3969/j.issn.1000-3428.2014.03.017
    Cloud storage is currently a focus of the IT field, with a wide range of needs in telecommunications and streaming media. Thin provisioning is an advanced storage virtualization technology that improves storage space utilization and simplifies the storage infrastructure, so it can meet the needs of cloud storage infrastructure. Aiming at the low efficiency of current thin provisioning, this paper designs and implements the I_THINP thin provisioning system against the background of the streaming media industry, using a modular, hierarchical design. By combining the file system, the generic block device, the iSCSI module and the thin pool module, it improves the space usage efficiency of thin provisioning and reduces the negative effect of space reclamation on system performance. Experimental results show that, compared with other thin provisioning products, the I_THINP system achieves a space reclamation percentage above 97%, with Input/Output Operations per Second(IOPS) decreased by only 16.53% and average I/O response time decreased by 22.82%, which shows that I_THINP reaches an advanced level of performance in the industry.
  • WU Jie, KAN Wen-xiao, DU Ran, LI Sha, WU Wen-jing, Andrei Tsaregorodtsev, CHEN Gang
    Computer Engineering. 2014, 40(3): 88-92. https://doi.org/10.3969/j.issn.1000-3428.2014.03.018
    A desktop grid architecture based on virtualization technologies is proposed to solve the problems of porting cross-platform applications, discrepant results from heterogeneous platforms and system scalability. Based on the isolation and encapsulation provided by virtualization, together with an application-oriented job scheduling policy and a finite virtual machine control method, a scalable and easily deployed desktop grid architecture is achieved. Test results demonstrate that the new architecture applies well to large-scale scenarios, and that the application-oriented scheduling policy and virtual machine control method are effective and feasible.
  • ZHANG Xue-jun, YAN Guang-hui, HU Xiao-hui
    Computer Engineering. 2014, 40(3): 93-98. https://doi.org/10.3969/j.issn.1000-3428.2014.03.019
    To address the poor load balancing of dynamic service scheduling and the low reliability of service execution in the traditional distributed simulation framework HLA/RTI, this paper proposes a Context-aware Simulation Service Scheduling Model(C3SM) comprising a general framework, a scheduling strategy and a service deployment scheme. The framework provides the functions and interaction interfaces of each module. For scheduling, a modified ant colony algorithm is used to achieve optimal load balancing and system reliability. Moreover, an overlapped-coverage deployment strategy is adopted to obtain high service availability with low resource consumption. Experiments comparing traditional HLA/RTI with C3SM on scheduling and service execution reliability show that C3SM obtains good load balance using real-time context information about the execution environment, and that the overlapped-coverage deployment scheme greatly improves the reliability of the simulation execution system.
  • LIU Lei, LI Jing, CHEN Li, FENG Xiao-bing
    Computer Engineering. 2014, 40(3): 99-102,112. https://doi.org/10.3969/j.issn.1000-3428.2014.03.020
    Speculative parallelization is an important technique for parallelizing legacy code. However, existing speculative runtime systems face serious performance problems such as load imbalance, frequent communication, high conflict costs and frequent process creation/destruction overheads. This paper proposes a process-based speculative runtime system that uses the Implicit Single Program Multiple Data(ISPMD) method to partition and execute tasks in parallel, and implements process reuse and delegated correctness checking to reduce the overheads of frequently creating and destroying speculative processes and of communication between them, improving the utilization of speculative processes. Besides, by having speculative tasks cooperate with a manager task, the system ensures correct execution even when speculative processes raise exceptions. Experimental results show that the system achieves a 231% speedup improvement over other process-based speculative parallel systems.
  • HU Xin-ming, SHENG Chong-chong, LI Jia-jia, WU Bai-feng
    Computer Engineering. 2014, 40(3): 103-107,119. https://doi.org/10.3969/j.issn.1000-3428.2014.03.021
    MPI+CUDA is the mainstream programming model for current GPU cluster architectures. However, with such a low-level programming model, programmers require detailed knowledge of the underlying architecture, which imposes a heavy burden; moreover, the resulting programs are less portable and inefficient. This paper proposes StreamMAP, an automatic task assignment system for GPU clusters. It provides a powerful yet concise language extension to describe the compute resource demands of cluster tasks, and develops a runtime system that maintains resource information and performs automatic task assignment for the GPU cluster. Experiments show that StreamMAP provides programmability, portability and scalability for GPU cluster applications.
  • XU Yao, JIANG Pan-pan, WANG Da-ming
    Computer Engineering. 2014, 40(3): 108-112. https://doi.org/10.3969/j.issn.1000-3428.2014.03.022
    Multiple Input Multiple Output(MIMO) system performance depends on the correlation of the channel: Spatial Multiplexing(SM) suits low-correlation channels, while Beamforming(BF) suits high-correlation channels. This paper proposes an adaptive MIMO receiving method based on the eigenvalue distribution determined by channel correlation. The method dynamically selects the receiving mode on a uniform circular antenna array: in large Angular Spread(AS) environments it selects the conventional MIMO receiving mode(antenna MIMO), and in small-AS, multipath-independent environments it selects the multi-beam receiving mode(Beam-MIMO). The impact of AS and multipath on the eigenvalue distribution is analyzed, and an adaptive MIMO receiving-mode switching criterion is given. Simulation results show that the proposed method adapts to complex wireless environments and provides better error rate performance.
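
The switching idea can be sketched with the eigenvalues of the channel Gram matrix H·H^H (an illustrative criterion; the threshold and the test channels below are assumptions, not values from the paper):

```python
import numpy as np

def select_mode(H, threshold=10.0):
    """Pick the receiving mode from the eigenvalues of H @ H^H: if the
    largest eigenvalue dominates the smallest (high spatial correlation,
    e.g. a near rank-1 channel), beamforming is preferred; otherwise the
    channel can support spatial multiplexing."""
    eig = np.linalg.eigvalsh(H @ H.conj().T)   # real, ascending order
    return "BF" if eig[-1] / max(eig[0], 1e-12) > threshold else "SM"

rng = np.random.default_rng(0)
# Rich-scattering (low-correlation) channel: eigenvalues of similar size.
H_rich = np.eye(4) + 0.1 * (rng.standard_normal((4, 4))
                            + 1j * rng.standard_normal((4, 4)))
# Highly correlated channel: rank-1 outer product, one dominant eigenvalue.
a = rng.standard_normal((4, 1)) + 1j * rng.standard_normal((4, 1))
b = rng.standard_normal((4, 1)) + 1j * rng.standard_normal((4, 1))
H_corr = a @ b.conj().T
```

A spread-out eigenvalue profile indicates multiple usable spatial streams, while one dominant eigenvalue means the channel effectively carries a single beam.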
  • LI Hui, LIU Shu-ji
    Computer Engineering. 2014, 40(3): 113-119. https://doi.org/10.3969/j.issn.1000-3428.2014.03.023
    Aiming at the unnecessary energy consumption caused by periodic clustering and the load balance problem in Wireless Sensor Networks(WSN), an Unequal Clustering routing algorithm based on node Degree and Distance(UCDD) is proposed. UCDD adopts a time-competition mechanism in the first round of clustering: the length of the competition time depends on a node's degree and its distance from the base station, and clusters of different competition radii are formed accordingly. After the first round, the cluster structure no longer changes; each cluster head dynamically chooses the next cluster head according to residual energy and communication cost. UCDD uses inter-cluster multi-hop routing, selecting relay nodes according to node residual energy, distance from the base station, communication cost and relay hops. Simulation results show that the algorithm prolongs the network lifetime by more than two times compared with the LEACH protocol and by 13.97% compared with the EEUC protocol, while balancing the energy dissipation of the network.
  • YE Jin, LIU Jian-tao, LIN Jing, LI Tao-shen
    Computer Engineering. 2014, 40(3): 120-122. https://doi.org/10.3969/j.issn.1000-3428.2014.03.024
    The proportional fair scheduling algorithm does not meet the various Quality of Service(QoS) needs of multimedia services. In particular, when the instantaneous QoS parameter values of a service approach its maximum acceptable QoS thresholds, the scheduling priority of the service does not rise noticeably, so the service cannot be scheduled in time and the quality of multimedia communication degrades. This paper therefore introduces QoS factor parameters into the priority expression of the Proportional Fair(PF) scheduling algorithm, enhancing the influence of QoS requirements on scheduling, and proposes a proportional fair scheduling algorithm based on a QoS utility function. Experimental results show that the algorithm quickly increases the scheduling opportunities of multimedia services close to their QoS thresholds: the delay of VoIP traffic is reduced by 44% and its fairness is raised by 3%.
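
A minimal sketch of a QoS-weighted PF priority (the utility shape and the exponent are illustrative assumptions, not the paper's exact function): classic PF priority is the instantaneous rate over the average rate, and the QoS factor multiplies in a term that grows sharply as a flow's delay approaches its acceptable maximum.

```python
def pf_priority(inst_rate, avg_rate, delay=0.0, delay_max=None, beta=2.0):
    """Proportional fair priority with an optional QoS utility factor:
    near-deadline traffic (e.g. VoIP) gets a sharply rising priority as
    delay approaches delay_max, so it is scheduled in time."""
    prio = inst_rate / max(avg_rate, 1e-9)
    if delay_max is not None:
        # Utility factor grows without bound as delay -> delay_max.
        prio *= (1.0 / max(1.0 - delay / delay_max, 1e-3)) ** beta
    return prio
```

With equal channel conditions, a flow at 90% of its delay budget outranks one at 10%, which is exactly the behavior the abstract says plain PF lacks.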
  • WANG Yong-jun, PENG Hua
    Computer Engineering. 2014, 40(3): 123-126,131. https://doi.org/10.3969/j.issn.1000-3428.2014.03.025
    A data-aided and decision-directed joint timing recovery method is presented for the synchronization of Continuous Phase Modulation(CPM). Based on the Pulse Amplitude Modulation(PAM) representation of CPM signals, the Timing Error Detector(TED) and the sequence detector are derived from the likelihood function, and a Phase Lock Loop(PLL) is constructed for timing error estimation. Timing acquisition is achieved with the data-aided and data-reuse method, and the timing error is tracked with the decision-directed method. Simulation results show that the algorithm overcomes the loss-of-lock problem of purely decision-directed methods under large timing errors, achieves timing recovery over a wide range of timing errors, and reduces the number of matched filters used in timing estimation and detection. For the CPM burst signal of the DVB-RCS2 standard, good timing estimation is achieved with only three matched filters. The algorithm has good timing estimation performance and robustness.
  • WANG Zhong-wei, JIA Zhen-hong, QIN Xi-zhong, XIA Xiao-yan, DENG Lei
    Computer Engineering. 2014, 40(3): 127-131. https://doi.org/10.3969/j.issn.1000-3428.2014.03.026
    Aiming at the growing contradiction between the demand for and supply of spectrum in wireless communications, this paper studies dynamic spectrum assignment in Cognitive Radio Network(CRN) and presents a dynamic channel allocation strategy combining guard channels with queuing. Without affecting primary users' services, the scheme reserves channels for secondary users handed over due to the arrival of primary users, while newly arriving secondary users are queued. When an ongoing primary or secondary user departs, queued secondary users occupy the idle sub-channels in priority order. Compared with only reserving guard channels or only using a queue buffer, the strategy has a small impact on the secondary users' average throughput and average delay, effectively reduces the overall system failure rate, and improves system performance.
  • WANG Wei
    Computer Engineering. 2014, 40(3): 132-136. https://doi.org/10.3969/j.issn.1000-3428.2014.03.027

    General Wireless Sensor Network(WSN) routing protocols perform poorly in long-distance band-type networks, whose characteristics cause a "hot spot" problem. This paper introduces CRLDB, an energy-balanced multi-hop routing protocol for such networks. It adopts the idea of non-uniform clustering, the concept of competition radius with a corresponding competitive strategy, and a cluster head selection mechanism that combines the optimal number of cluster heads, node residual energy and the number of surrounding neighbours, so as to balance residual energy and transmission energy among nodes. NS2 simulations compare CRLDB with the existing routing protocols Low Energy Adaptive Clustering Hierarchy(LEACH) and LEACH-C in terms of network survival time, overall network energy consumption and the amount of data received by base stations. CRLDB improves considerably on all of these measures; it balances the network's energy consumption and increases the network's lifetime.

  • JIANG Shen, MA Rong-juan
    Computer Engineering. 2014, 40(3): 137-142. https://doi.org/10.3969/j.issn.1000-3428.2014.03.028
    Anomaly event detection in Wireless Sensor Network(WSN) is currently a hot topic. In order to improve detection efficiency, this paper proposes an anomaly event detection scheme based on compressive sensing. Measurements of the sensed data are obtained by compressive sampling, and the anomaly event detection problem is modeled as a reweighted l1 minimization problem, which is iteratively solved by the Orthogonal Matching Pursuit(OMP) algorithm. The solution is judged by a detection function, and the weights are refreshed in the next iteration according to the judgments, until all abnormal events in the WSN are detected. Experimental results show that the proposed scheme obtains lower probabilities of missed detection and false alarm in different noise environments. Compared with the CCM and GEP-ADS schemes, the energy consumption of this scheme is reduced by approximately 4.1% and 5.8% respectively.
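    The recovery step described above pairs reweighted l1 minimization with OMP. A minimal sketch of plain OMP, the inner solver, is given below; the reweighting loop and the detection function that the paper defines are not reproduced, and the dimensions are illustrative assumptions.

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal Matching Pursuit: greedily recover a sparse vector x
    from measurements y = Phi @ x."""
    residual = y.copy()
    support = []
    x = np.zeros(Phi.shape[1])
    for _ in range(sparsity):
        # pick the column most correlated with the current residual
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit on the chosen support, then update the residual
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x[support] = coef
    return x
```

In the paper's setting the residual after each pass would additionally be screened by the detection function to decide which weights to refresh.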
  • TANG Ya-ping, XU Da-zhuan, ZHU Qiu-ming, REN Jia-min, ZHOU Sheng-kui, HUANG Pan
    Computer Engineering. 2014, 40(3): 143-146,151. https://doi.org/10.3969/j.issn.1000-3428.2014.03.029
    In traditional reverse ray-tracing and prediction models, the order of reflection and diffraction is two or less. In this paper, a new 3D reverse ray-tracing algorithm for accurate field strength prediction is proposed, particularly suitable for urban scenarios, in which the orders of reflection and diffraction are arbitrary. A virtual source tree is created by direct search, after which all valid paths from transmitter to receiver can be found quickly by inverse search. The effects of the reflection and diffraction orders on prediction precision and complexity are also analyzed. Numerical simulation results show that this algorithm has higher accuracy and efficiency, which is quite useful for the design and optimization of wireless networks.
  • GUO Tie-liang, ZHAO Dan-feng, QIAN Jin-xi
    Computer Engineering. 2014, 40(3): 147-151. https://doi.org/10.3969/j.issn.1000-3428.2014.03.030
    In Orthogonal Frequency Division Multiplexing(OFDM) Underwater Acoustic(UWA) communication systems, the traditional data-aided estimation algorithm for the Doppler Shift Factor(DSF) reduces the system transmission rate. In addition, a resampling algorithm is often used for sample rate conversion at the receiving end, but its computational cost is large. Therefore, a novel DSF estimation and sample rate conversion algorithm based on the oversampling technique is proposed. The DSF is estimated by comparing the sampling points of the transmitted and received signals, and linear interpolation is adopted to perform the sample rate conversion. Theory analysis and simulation results show that the algorithm guarantees system performance while reducing the receiver's computation amount, so it is applicable to high-speed real-time UWA communication systems.
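    As a rough illustration of the interpolation stage only (not the paper's DSF estimator), sample rate conversion by linear interpolation can be sketched as follows; the conversion factor and the end-of-array handling are assumptions.

```python
import numpy as np

def resample_linear(x, factor):
    """Resample signal x by a rate-conversion factor using linear
    interpolation (e.g. to undo a Doppler scaling of the sample clock)."""
    n_out = int(len(x) * factor)
    # fractional positions of the output samples on the input time grid
    t = np.arange(n_out) / factor
    i = np.floor(t).astype(int)
    i = np.minimum(i, len(x) - 2)   # keep the i+1 index inside the array
    frac = t - i
    return (1 - frac) * x[i] + frac * x[i + 1]
```

Linear interpolation trades some accuracy for a much smaller computation than full polyphase resampling, which matches the abstract's motivation.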
  • YU Guang-zhou
    Computer Engineering. 2014, 40(3): 152-157,162. https://doi.org/10.3969/j.issn.1000-3428.2014.03.031
    The multi-class target coverage problem is currently a research hotspot in Wireless Sensor Networks(WSN). Aiming at the disadvantages of existing target coverage algorithms, the multi-class target coverage problem is modeled as a lifetime maximization problem based on Linear Programming(LP), and a clustering-based target coverage algorithm is proposed. According to the residual energy and sensing capability of nodes, a global coverage set close to the optimal solution is obtained by solving the optimal solution within each cluster structure; moreover, the algorithm dispatches the corresponding sensing module to cover targets of the same attribute within its sensing range. Experimental results show that the performance of this algorithm is superior to the CWGC algorithm in terms of network lifetime and time efficiency, and close to the optimal value of the LP.
  • LEI Hong-li, TIAN Yu, MA Lin-hua, RU Le
    Computer Engineering. 2014, 40(3): 158-162. https://doi.org/10.3969/j.issn.1000-3428.2014.03.032
    Two requirements must be satisfied when multiple unmanned aircraft groups implement a cooperative task: the command and cooperation data should be transmitted preferentially among the groups, and the intra-group transmission rate should be fast. To meet them, a cluster-based MAC protocol, the Double Slots Hybrid Protocol(DSHP), is presented. DSHP sets key data transmission among groups as its top priority. Inter-group communication is processed through the cluster head with the Carrier Sense Multiple Access(CSMA) protocol, while nodes in the same group communicate directly with each other using Time Division Multiple Access(TDMA). The cluster head is assigned two successive time slots, which send the same data during inter-cluster communication. The double slots ensure that the intra-group data of other cluster heads can only collide with the inter-group data in the first time slot, and carrier sensing ensures that the data transmitted in the second time slot will not be ruined by any intra-group data. Simulation results show that this protocol protects inter-group communication from the interference of intra-group communication and improves the success rate of communication.
  • LIU Xiang-hui, HAN Wen-bao, WANG Zheng, QUAN Jian-xiao
    Computer Engineering. 2014, 40(3): 163-166. https://doi.org/10.3969/j.issn.1000-3428.2014.03.033
    The RSA algorithm is one of the most widely used public key cryptosystems at present, and lattice attacks play an important role in the analysis of RSA. The problem of partial discrete private key bit leakage is transformed into the solution of multivariate linear congruence equations, and a special lattice is constructed. Then, by lattice reduction algorithms such as the LLL algorithm, the small roots of the multivariate linear congruence equations can be obtained with high probability. Based on this technique, this paper proposes a lattice attack on RSA with discrete private key bit leakage. With this method, if the public parameter satisfies e = N^β ≤ N^(1/2) and the unknown part of the private key d satisfies N^α ≤ N^(1/2−β), it can recover the private key d with high probability. An experiment on 1 024 bit numbers implemented with the NTL package verifies the effectiveness of the attack.
  • WANG Qiu-yan, JIN Chen-hui
    Computer Engineering. 2014, 40(3): 167-170,174. https://doi.org/10.3969/j.issn.1000-3428.2014.03.034
    The Grain cipher is one of the three final hardware-oriented stream ciphers of the eSTREAM project. It is based on two feedback shift registers and a filtering function, and can effectively resist stream cipher attacks based on linear feedback shift registers. In this paper, the nonsingularity of Grain-like cascade feedback shift registers is investigated, and sufficient conditions for the state update transformations in the initialization phase and the key stream generation phase to be bijective are given. As a counterexample, for word-oriented Grain-like cascade feedback shift registers, even if the two feedback shift registers are both nonsingular and the filtering function satisfies proper conditions, the state update transformation can still be non-bijective. The nonsingularity criteria are verified using the Grain v1 algorithm.
  • CHENG Wen-juan, DONG Ying-ying, HAN Jun-guang
    Computer Engineering. 2014, 40(3): 171-174. https://doi.org/10.3969/j.issn.1000-3428.2014.03.035
    Most electronic auction schemes assume the existence of a trusted third party, which weakens the security of the electronic auction. Aiming at this problem, this paper proposes a sealed-bid electronic auction scheme based on an untrusted third party. It uses digital signatures to verify the identity of each bidder and to ensure the privacy of the bidder's identification. In calculating the transaction price, based on the intractability of the discrete logarithm, it encrypts and packages the binary length of the auction price to ensure the secrecy and accuracy of the resulting auction price. Analysis shows that the scheme is simple and highly secure, and its computational efficiency is greatly improved relative to most existing electronic auction schemes.
  • LIN Pan, CHEN Jian-mei, WANG Yuan-peng
    Computer Engineering. 2014, 40(3): 175-179,183. https://doi.org/10.3969/j.issn.1000-3428.2014.03.036
    As Electronic Medical Records(EMR) systems are widely used in many hospitals, doctors can interact with the cloud through computers and share medical data, while Clinical Decision Support Systems(CDSS) extract the medical record information and help doctors make accurate diagnoses. To solve the problem of medical information disclosure or tampering during transmission in an EMR system, the medical data after B++ encoding is embedded in the patient's fingerprint; the fingerprint carrying the feature information, together with the registration key, is then embedded in the medical image by a digital watermarking algorithm based on the Non-subsampled Contourlet Transform(NSCT), which improves the safety of medical information in the EMR system. Experimental results show that the method ensures high robustness of the watermarked image and protects the integrity of the medical data by embedding high-capacity medical information.
  • ZHUO Ze-peng, CHONG Jin-feng, WANG Hui
    Computer Engineering. 2014, 40(3): 180-183. https://doi.org/10.3969/j.issn.1000-3428.2014.03.037
    The correlation functions of Boolean functions depict their diffusion characteristics and linear structure characteristics, and the properties of correlation functions play an important role in Boolean function theory. According to the definitions of the auto-correlation and cross-correlation functions, the auto-correlation function of a special form of quadratic Boolean function is studied and its expression is given. On this basis, an upper bound on the square of the auto-correlation function of cubic Boolean functions is given, and the upper bounds of the absolute indicators of two classes of trace Boolean functions are investigated.
  • XIE Mi-mi, LIAO Xiao-feng, ZHOU Qing
    Computer Engineering. 2014, 40(3): 184-187. https://doi.org/10.3969/j.issn.1000-3428.2014.03.038
    For oblivious transfer, distributed settings can better protect the security of the sender and the accessibility of secret messages. This paper therefore presents a generalized oblivious transfer protocol in a distributed setting based on secret sharing, which allows a user to select and retrieve a qualified subset of secret messages according to specific rules set by the sender. The protocol combines a generalized secret sharing scheme with the construction of polynomials, predefines the message retrieval rules by introducing the complement of the secret sharing access structure, and realizes the distributed setting through construction and reconstruction of polynomials. In the construction phase, the sender builds three polynomials according to the encrypted messages, keys and verification values, and sends the polynomials to the participating servers; the user obtains the requested messages by communicating with a predefined number of servers. Analysis indicates that the protocol is easy to implement with low computational complexity, and ensures high efficiency and security as well.
  • JIANG Guo-rui, PANG Ting
    Computer Engineering. 2014, 40(3): 188-192. https://doi.org/10.3969/j.issn.1000-3428.2014.03.039
    Conflicts over price, amount and delivery time often appear during the collaboration of a manufacturing supply chain; if not resolved in time, they affect the interest of the whole supply chain. To effectively resolve such conflicts and make up for the shortcomings of traditional negotiation, this paper proposes a self-adaptive negotiation method based on multi-Agent. The method takes the order between the two echelons of the supply chain, manufacturer and supplier, as the research object, multi-Agent supply chain collaboration as the constraint condition, and case-based reasoning as the main negotiation algorithm. Grey correlation degree is introduced into the calculation of similarity between the case set and the target case, and a genetic algorithm is used to optimize the weights of similar case issues. A numerical example shows that the method simplifies the calculation of case similarity, which improves the efficiency of conflict resolution, and strengthens the self-adaptability, providing optimal decisions for conflict resolution.
  • LI Ping, ZHU Jun-yan, PENG Fang, YANG Liang
    Computer Engineering. 2014, 40(3): 193-195,200. https://doi.org/10.3969/j.issn.1000-3428.2014.03.040
    Inspired by the skeleton construction method of visibility graphs and the graph search method A*, obstacles are enveloped by rectangles, path points are generated as extensions of obstacle vertices, and a new path planning algorithm, Lambda*, is presented. Like A*, it needs two lists, but with a difference: the CLOSED list in Lambda* stores the path nodes from the starting node sequentially, and the OPEN list saves the successor nodes of the node being extended in the CLOSED list. Moreover, a SMOOTH process is added to smooth the path saved in CLOSED. For validation, Lambda* is applied in a 2D simulation environment. Simulation results show that Lambda*'s time consumption decreases substantially, with little increase in path length compared with the A* algorithm.
  • LI Yue-guang, YIN Dong, ZHANG Rong
    Computer Engineering. 2014, 40(3): 196-200. https://doi.org/10.3969/j.issn.1000-3428.2014.03.041
    For the task planning problem of robots guiding service objects, with the Robot Operating System(ROS) as the experimental platform and combining the Markov Decision Process(MDP) with micro-reboot technology, this paper presents a robot task planning scheme suitable for guide service. Considering the service object's identity information and the total cost of the service process, the scheme establishes the optimal execution plan using an MDP model, and, based on the distributed nature of ROS, uses a micro-reboot self-repairing mechanism to solve functional failures. Simulation results show that the proposal is effective in the implementation of navigation missions. Compared with the traditional processing method, it shows the advantages of micro-reboot technology in dealing with functional failures, and the planning scheme achieves a 91.03% success rate when barriers are added at random.
  • TAN Fei-gang, YIN Chang-ming, ZHOU Shu-ren
    Computer Engineering. 2014, 40(3): 201-204. https://doi.org/10.3969/j.issn.1000-3428.2014.03.042
    This paper presents a feature fusion method(WLD-LBP) based on an improved Weber Local Descriptor(WLD) and Local Binary Pattern(LBP) through a two-dimensional discrete Haar wavelet transform. The algorithm starts with a two-dimensional discrete Haar wavelet transform of the image to obtain sub-images of four different frequencies. Making full use of WLD and LBP, it extracts the WLD characteristics of the low-frequency part and the LBP features of the other three high-frequency portions, and then produces a vector consisting of these characteristics, called the WLD-LBP feature. Five groups of test experiments are conducted on the INRIA Person database using SVM as the classifier, comparing with the WLD, Histogram of Oriented Gradient(HOG), PHOG and fused HOG-LBP features respectively. The results demonstrate the effectiveness of the proposed method, with the highest recognition rate reaching 98.1%, and its robustness to illumination and noise.
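    For background, the LBP feature that the fusion builds on assigns each pixel an 8-bit code by thresholding its 3×3 neighbourhood against the centre value. A minimal sketch of basic LBP (not the paper's improved WLD or the Haar decomposition) might look like:

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 Local Binary Pattern: each interior pixel gets an 8-bit
    code from comparing its eight neighbours with the centre value."""
    img = np.asarray(img, dtype=int)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=int)
    # neighbour offsets, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neigh >= centre).astype(int) << bit
    return out
```

A histogram of these codes over an image region would then serve as the LBP part of the fused feature vector.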
  • MA Bin, CHEN Jun-jie
    Computer Engineering. 2014, 40(3): 205-207. https://doi.org/10.3969/j.issn.1000-3428.2014.03.043
    Independent Component Analysis(ICA) is an effective method for processing brain functional Magnetic Resonance Imaging(fMRI) data. Since the spatial dimension of fMRI data is large, spatial ICA is chosen for discussion. The basic model structure of ICA and the three common spatial ICA algorithms, namely the Infomax, Fixed-Point and Orth-Infomax algorithms, are studied in depth. A Chinese word meaning differentiation experiment is designed and analyzed with the linear correlation method. Experimental results show that the time series of the CTR in the Orth-Infomax algorithm has the maximum average correlation coefficient with the reference function, compared with the Infomax and Fixed-Point algorithms; Orth-Infomax thus offers both high solution quality and high solving efficiency and can efficiently process fMRI data.
  • WANG Hui-ying, YUE Xiao-bo, ZHOU Kai-qing
    Computer Engineering. 2014, 40(3): 208-212. https://doi.org/10.3969/j.issn.1000-3428.2014.03.044
    The time complexity of the parallel reasoning algorithm based on Fuzzy Petri Nets(FPN) is related to the dimension of its matrices, and it increases as the scale of the FPN becomes larger. By analyzing the characteristics of the parallel reasoning algorithm and the relevant theories of Reverse Search(RS), this paper proposes a novel Bi-directional Parallel Reasoning(BDPR) algorithm based on FPN. For an FPN model with a matrix of 11 rows and 8 columns, the BDPR algorithm reduces the reasoning matrix to 7 rows and 6 columns. Experimental analysis shows that, compared with a general Fuzzy Reasoning(FR) algorithm and an RS algorithm, the BDPR algorithm effectively improves the parallelism of the whole reasoning process, reduces the time complexity, and improves the efficiency of reasoning.
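    For orientation, one common matrix-style formulation of a single FPN forward-reasoning step is sketched below. This is an illustrative sketch only: formulations vary between papers, and the BDPR algorithm's reverse-search direction is not shown.

```python
import numpy as np

def fpn_step(marking, pre, post, mu):
    """One forward-reasoning step of a fuzzy Petri net.
    marking : truth degrees of the places
    pre     : places x transitions incidence (1 where place is an input)
    post    : places x transitions incidence (1 where place is an output)
    mu      : certainty factor of each transition's rule
    """
    n_t = pre.shape[1]
    # a transition fires with the minimum truth of its input places,
    # scaled by the rule's certainty factor
    fire = np.array([marking[pre[:, t] > 0].min() for t in range(n_t)]) * mu
    # each output place keeps the max of its current truth and fired values
    new = marking.copy()
    for t in range(n_t):
        new[post[:, t] > 0] = np.maximum(new[post[:, t] > 0], fire[t])
    return new
```

Iterating this step until the marking stops changing yields the forward half of parallel FPN reasoning.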
  • HONG Liu-rong
    Computer Engineering. 2014, 40(3): 213-217,223. https://doi.org/10.3969/j.issn.1000-3428.2014.03.045
    Mice action data is an important part of the experimental data in neurology, physiological pharmacology, etc. This paper proposes a method for multi-action analysis of mice that does not require limb information. Inter-frame differences of the mouse silhouette are extracted, taking into account the relation between the difference and the position of the silhouette, so that sequential silhouette difference frames are obtained from action videos. In the training phase, 80 key frames are extracted using the Pillar K-means algorithm; each video is represented by its key frames, and a histogram of key frame frequency is obtained. In the test phase, the histogram of each video is determined from its key frames by the nearest neighbour algorithm, so the classification problem is transformed into a similarity problem, and actions are classified by χ2 distances. Experimental results show that the correct rate of the proposed method ranges from 95% to 100%.
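    The χ2 distance used for the final nearest-neighbour classification can be written down directly; the small ε smoothing term below is an assumption to avoid division by zero on empty histogram bins.

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two histograms (smaller = more similar)."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```

A test video is then assigned the action label of the training histogram with the smallest χ2 distance.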
  • WANG Fang, LI Hua, DU Jin-ling
    Computer Engineering. 2014, 40(3): 218-223. https://doi.org/10.3969/j.issn.1000-3428.2014.03.046
    Conventional data quality detection methods require a large amount of initial data, while traffic flow data at non-detector roads is very limited. A new non-detector road traffic flow data quality detection method based on grey system theory is put forward to deal with this contradiction. The raw traffic flow data obtained at different detection points is processed into a set of data sequences. Through grey generating, calculating and standardizing this set of sequences, the parameters λi, which reflect the closeness of the mutual relations between the different data sequences, are obtained. Outliers are detected by comparing λi with a threshold λ selected according to demand. Using probe car traffic flow data covering a local road network of Hangzhou, the efficiency of the proposed method is verified by comparison with a detection method based on the similarity coefficient. The proposed method performs better: its average false detection rate is 21.00% lower than that of the similarity coefficient method, and its average accuracy rate is 28.64% higher.
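    A minimal sketch of the "grey generating" idea, assuming the standard first-order Accumulated Generating Operation (1-AGO) with initialisation; the closeness score shown is purely illustrative and is not the paper's exact λi definition.

```python
import numpy as np

def grey_generate(seq):
    """1-AGO grey generating: accumulate the raw sequence to smooth
    fluctuations, then initialise by the first value for comparability."""
    x = np.asarray(seq, dtype=float)
    x1 = np.cumsum(x)        # accumulated (generated) sequence
    return x1 / x1[0]        # initialisation so sequences share a scale

def closeness(seq_a, seq_b):
    """Illustrative closeness score between two generated sequences:
    0 means identical, larger means further apart."""
    a, b = grey_generate(seq_a), grey_generate(seq_b)
    return float(np.mean(np.abs(a - b) / (np.abs(a) + np.abs(b))))
```

Comparing such a score against a chosen threshold mirrors the paper's λi-versus-λ outlier test at a high level.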
  • Miliwan.Xuehelaiti, Mairehaba.Aili, Tuergen.Yibulayin, JIANG Wen-bin
    Computer Engineering. 2014, 40(3): 224-227. https://doi.org/10.3969/j.issn.1000-3428.2014.03.047

    Uyghur, which belongs to the Altaic language family, is a typical agglutinative language with a large number of suffixes, in strong contrast with Chinese. According to the morphological characteristics of the Uyghur language, this paper analyzes the role of Uyghur suffixes in a Chinese-Uyghur statistical machine translation system. With the help of Cheiro and existing technology, it builds a hierarchical phrase-based Chinese-Uyghur statistical machine translation system. Comparing the performance of the translation system with parallel corpora of different granularity, experimental results show that stem-affix representational units improve the performance of the Chinese-Uyghur statistical machine translation system, with the BLEU value reaching 0.197 2.

  • WANG Yan-peng, WANG Lei, ZOU Feng, QIAN Xin-qiao
    Computer Engineering. 2014, 40(3): 228-231,237. https://doi.org/10.3969/j.issn.1000-3428.2014.03.048
    To address the particle degeneracy caused by evolution and the particle impoverishment caused by resampling, this paper proposes a novel particle filtering algorithm based on knowledge plates and coevolution. The main idea is to sample from the importance density function and divide the generated particle samples into several sub-sample groups; each sub-sample group searches a different area and finds the optimal state estimate of the dynamical system through communication with the others. Theoretical analysis and simulation results show that the proposed algorithm improves population diversity and has potential advantages in convergence rate and computational complexity, thus enhancing the searching performance of the algorithm.
  • DING Ying, LI Fei
    Computer Engineering. 2014, 40(3): 232-237. https://doi.org/10.3969/j.issn.1000-3428.2014.03.049
    Against the problems of the Quantum Particle Swarm Optimization(QPSO) algorithm in late iterations, such as declining population diversity, slow convergence and a tendency to fall into local optima, a cooperative double-center QPSO algorithm with a self-adaptive contraction-expansion coefficient is proposed in this paper. It has two characteristics: (1)the contraction-expansion coefficient is self-adaptively adjusted, which helps the swarm jump out of local optima and improves the global search ability; (2)the global best location of every iteration is renewed in two ways: one is the same as in QPSO, and the other uses double-center particles together with the current global optimal location, cooperating in the corresponding dimensions to improve how the global best location is renewed. The performance of the algorithm is analyzed under fixed iterations and fixed precisions. Experimental results show that, compared with QPSO, it has a higher convergence rate and better search ability.
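    For context, one iteration of a commonly used QPSO position update can be sketched as below (β is the contraction-expansion coefficient that the paper adapts during the run). This is a generic baseline sketch; the paper's double-center cooperation and the adaptive β schedule are not reproduced.

```python
import numpy as np

def qpso_update(x, pbest, gbest, beta, rng):
    """One QPSO position update for a swarm of n particles in d dims.
    x, pbest : (n, d) current positions and personal bests
    gbest    : (d,) global best position
    beta     : contraction-expansion coefficient
    """
    n, d = x.shape
    phi = rng.random((n, d))
    # local attractor between personal and global best
    p = phi * pbest + (1 - phi) * gbest
    mbest = pbest.mean(axis=0)                 # mean of personal bests
    u = rng.random((n, d))
    sign = np.where(rng.random((n, d)) < 0.5, 1.0, -1.0)
    return p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
```

With β = 0 the update collapses onto the local attractor; larger β widens the search, which is why adapting it over the run trades exploration against convergence.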
  • YANG Liu, CHEN Yong-lin, WANG Yi, TAN Li-wen, CHEN Wei
    Computer Engineering. 2014, 40(3): 238-243. https://doi.org/10.3969/j.issn.1000-3428.2014.03.050
    Computed Tomography(CT) images are significant in disease diagnosis, and the graph cut model has been widely used in the automatic detection of complicated diseases. Because complex areas of medical images are very hard to model in the conventional graph cut literature, this paper adopts the kernel trick so that tumor segmentation is computed in a high-dimensional kernel space rather than directly in the original spatial space. Complex modeling and human-computer interaction are thereby avoided through the kernel trick. Moreover, Mercer's theory shows that the kernel computation is implicit and no explicit model of the different areas is needed, which implies that kernel graph cuts are applicable across different applications. The proposed approach is validated on real CT image data from a clinical case, and the tumor is successfully extracted from the liver images. Results show that the proposed approach can be further improved and applied in the clinic as an auxiliary diagnosis assistant in the future.
  • HAN Dong-mei, WANG Wen, LI Bo-fei
    Computer Engineering. 2014, 40(3): 244-248. https://doi.org/10.3969/j.issn.1000-3428.2014.03.051
    In order to realize the extraction of low-level visual features from images together with semantic reasoning, this paper starts from remote sensing image interpretation, combines the Gray Level Co-occurrence Matrix(GLCM) and a Fuzzy C-Means(FCM) classifier to extract texture features, then detects edges with multi-scale, multi-structuring elements based on grayscale morphology, and finally constructs multi-source geological data based on the fault zone, using the Chengdu parcels to test and verify the model. The results completely coincide with the experts' field studies, which demonstrates the feasibility of the model, brings the results of machine analysis closer to those of visual interpretation, and provides valuable references for the digitalization of earth science study and the informationization of image interpretation.
  • LIU Jin-rong, LI Chun-peng, OUYANG Jian-quan, LIU Jing
    Computer Engineering. 2014, 40(3): 249-252,257. https://doi.org/10.3969/j.issn.1000-3428.2014.03.052
    Capturing depth images with active sensors is a current trend, widely used in navigation, human-computer interaction, augmented reality and so on. However, common low-cost sensors have their own disadvantages, such as low resolution, holes, and unmatched object boundaries. For these problems, this paper proposes a depth image enhancement algorithm based on improved joint bilateral filtering. A depth-based foreground segmentation method is adopted to find the set of pixels with unmatched foreground edges, and an interpolation algorithm based on the joint bilateral filter is used to fill the holes. Meanwhile, for further improvement, a guided depth similarity term and a gradient term are introduced to preserve the edge structure. Experimental results show that, compared with existing joint bilateral interpolation, the improved method decreases the Mean Squared Error(MSE) by 13% on average and has a better edge-preserving effect.
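    A minimal sketch of a plain joint (cross) bilateral filter, the baseline that the paper improves on: the range weights come from a registered guidance image so that the smoothed depth follows the guidance edges. Window radius and σ values below are arbitrary assumptions.

```python
import numpy as np

def joint_bilateral(depth, guide, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Joint bilateral filter: smooth `depth` with spatial weights plus
    range weights computed on the registered `guide` image."""
    h, w = depth.shape
    out = np.zeros((h, w), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    d = np.pad(depth.astype(float), radius, mode='edge')
    g = np.pad(guide.astype(float), radius, mode='edge')
    for y in range(h):
        for x in range(w):
            dwin = d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            gwin = g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # range weight from the guidance image, not from the depth
            rng_w = np.exp(-(gwin - g[y + radius, x + radius])**2
                           / (2 * sigma_r**2))
            wgt = spatial * rng_w
            out[y, x] = np.sum(wgt * dwin) / np.sum(wgt)
    return out
```

For hole filling, the same weighted average would be restricted to valid depth samples; the paper additionally adds depth-similarity and gradient terms.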
  • YANG Bin, LI Xu-dong, YAN Lei, YU You-sheng
    Computer Engineering. 2014, 40(3): 253-257. https://doi.org/10.3969/j.issn.1000-3428.2014.03.053
    Small imaging coverage and large parallax angles make image processing for aerial remote sensing difficult. This paper proposes an algorithm for aerial remote sensing image mosaicing. The original image is corrected by a perspective transform whose estimated matrix is determined by the relative attitude of the two images, which solves the distortion problem caused by large parallax angles. Image registration is performed with the Scale Invariant Feature Transform(SIFT) algorithm and a method based on probability density, and the image mosaic process is completed by the wavelet transform. Experimental results show that the mosaic performance of this algorithm is better than the traditional SIFT algorithm under different surface conditions.
  • ZHOU Qin-qing, CHEN Zun-de
    Computer Engineering. 2014, 40(3): 258-261,265. https://doi.org/10.3969/j.issn.1000-3428.2014.03.054
    To reduce the computational complexity of Compressed Sensing(CS) in Visual Sensor Networks(VSN), an image recovery algorithm based on quadratic programming is proposed. The under-determined linear equations in CS recovery are solved by bound-constrained quadratic programming, and an image recovery algorithm is designed based on the Armijo rule. Theory analysis and experimental results show that the proposed algorithm reduces the image recovery time by about 1/3 and enhances the recovery quality by 3 dB~6 dB, thus significantly improving the real-time performance of image recovery in VSN.
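    A bound-constrained quadratic program can be attacked, for illustration, with projected gradient descent plus Armijo backtracking line search; the sketch below is a generic solver under those assumptions, not the paper's exact algorithm.

```python
import numpy as np

def pg_armijo(Q, c, lo, hi, x0, iters=200, s=1.0, shrink=0.5, sigma=1e-4):
    """Projected gradient with Armijo backtracking for the
    bound-constrained QP:  min 0.5 x'Qx + c'x,  lo <= x <= hi."""
    f = lambda x: 0.5 * x @ Q @ x + c @ x
    proj = lambda x: np.clip(x, lo, hi)
    x = proj(x0.astype(float))
    for _ in range(iters):
        g = Q @ x + c
        t = s
        # Armijo rule: shrink the step until sufficient decrease holds
        while True:
            x_new = proj(x - t * g)
            if f(x_new) <= f(x) + sigma * g @ (x_new - x) or t < 1e-12:
                break
            t *= shrink
        if np.allclose(x_new, x):    # projected fixed point: stop
            break
        x = x_new
    return x
```

In the CS setting, Q and c would come from the measurement matrix (Q = A'A, c = −A'y) and the bounds from the valid pixel range.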
  • CAO Pei-cai, LIU Chen-bin, ZHANG Hai-shi, HUANG Feng-ping, XIA Shun-ren
    Computer Engineering. 2014, 40(3): 262-265. https://doi.org/10.3969/j.issn.1000-3428.2014.03.055
    In order to accurately separate gray matter, white matter and background in brain Magnetic Resonance Image(MRI), an automatic segmentation method based on the C-V model and Markov Random Field(MRF) is proposed. The C-V algorithm and morphology are used to preprocess the original image and remove unnecessary brain tissue, giving the image to be segmented. The local entropy of the gray scale field is introduced to estimate the penalty factor, and the MRF model is used to segment gray matter and white matter; the final segmentation result is refined by morphological methods. Experiments on 96 IBSR images and 46 clinical images show that the proposed method achieves automatic segmentation of brain MRI with higher accuracy and faster processing speed than previous methods.
  • LIU Lei-lei, JIANG Rong-xin
    Computer Engineering. 2014, 40(3): 266-269. https://doi.org/10.3969/j.issn.1000-3428.2014.03.056
    To address the strong noise in low-light video surveillance images, a new image noise reduction algorithm based on motion detection is proposed. The noise properties of low-light surveillance images are studied, and the image is divided into 8×8 pixel blocks that a threshold-based motion detection algorithm classifies as motion blocks or still blocks. An improved Wiener filter is designed and implemented for noise reduction of motion blocks, and a compact algorithm combining mathematical morphology and median filtering is designed for still blocks. Experimental results show that the time complexity of the algorithm is about O(n) and that the PSNR and DV/BV values of the denoised image are higher than those of other algorithms, which proves that the algorithm has low time complexity while reducing noise well with little loss of image resolution.
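The threshold motion detection that splits the frame into 8×8 motion and still blocks can be sketched as a mean-absolute-difference test against the previous frame; the threshold value and function name here are illustrative, not the paper's:

```python
def classify_blocks(prev, curr, block=8, thresh=10.0):
    """Split a frame into block x block tiles and mark each tile as moving (True)
    when its mean absolute difference against the previous frame exceeds thresh."""
    h, w = len(curr), len(curr[0])
    motion = {}
    for by in range(0, h, block):
        for bx in range(0, w, block):
            diff = tot = 0
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    diff += abs(curr[y][x] - prev[y][x])
                    tot += 1
            motion[(by, bx)] = diff / tot > thresh  # True = motion block
    return motion
```

Each block then goes to the branch suited to it: the Wiener-style filter for motion blocks, morphology plus median filtering for still blocks.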
  • LEI Hai-jun, YANG Zhong-wang, CHEN Xiao, YUAN Mei-leng
    Computer Engineering. 2014, 40(3): 270-273. https://doi.org/10.3969/j.issn.1000-3428.2014.03.057
    This paper analyzes the coding unit algorithm of the High Efficiency Video Coding(HEVC) standard. Aiming at its high computational complexity, it proposes a fast coding unit decision algorithm for HEVC based on coding unit correlation and texture. The algorithm measures the correlation between the current coding unit and adjacent coding units, calculates the texture complexity of coding units, sets a reasonable threshold, and performs early termination to quickly find the optimal coding unit. Simulation results show that, compared with the HEVC reference software HM8.0, the proposed algorithm reduces encoding time by about 37.4% on average and by up to 48.2%, with negligible loss in bit-rate performance.
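The early-termination idea — stop splitting a coding unit when it is smooth and its neighbours were not split — can be sketched with variance as the texture measure. The measure, the threshold and the names are assumptions for illustration, not HM8.0 code:

```python
def texture_complexity(block):
    """Variance of the pixel values, used as a simple texture measure."""
    pix = [p for row in block for p in row]
    mean = sum(pix) / len(pix)
    return sum((p - mean) ** 2 for p in pix) / len(pix)

def stop_splitting(block, neighbours_split, var_thresh=25.0):
    """Terminate CU splitting early when the block is smooth and none of the
    adjacent coding units were split either (exploiting CU correlation)."""
    return texture_complexity(block) < var_thresh and not any(neighbours_split)
```

Skipping the rate-distortion search for such blocks is where the encoding-time saving comes from; the threshold trades speed against the small bit-rate loss.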
  • YANG Dan, LU Gui-fu, ZHOU Ming-zheng
    Computer Engineering. 2014, 40(3): 274-277. https://doi.org/10.3969/j.issn.1000-3428.2014.03.058
    Aiming at data damage or loss during video transmission, an adaptive spatial error concealment algorithm based on geometric structure is proposed to better recover image quality. The two nearest surrounding layers of pixels are used to extract the local geometric structure, and the surrounding pixels of a damaged block distinguish smooth blocks from edge blocks. Bilinear Interpolation(BI) is used for smooth blocks, while for edge blocks the transition points are used to detect the edge direction and the lost Macro Block(MB) is partitioned into segments; each pixel in a segment is directionally interpolated from the boundary pixels adjacent to that segment. Experimental results show that the Peak Signal to Noise Ratio(PSNR) of the reconstructed video is 0.5 dB~3 dB higher than that of BI and Directional Interpolation(DI) for different MB loss rates and video sequences. The proposed algorithm achieves good directional interpolation while avoiding fake edges, and the subjective quality of the image is greatly improved.
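The Bilinear Interpolation(BI) branch for smooth blocks can be sketched as distance-weighted interpolation from the four boundary rows/columns around the lost block. This is a generic sketch of BI concealment; the paper's exact weighting may differ:

```python
def bilinear_conceal(top, bottom, left, right, size):
    """Fill a lost size x size block from the pixel rows/columns on its four
    boundaries, weighting each boundary by its proximity to the pixel."""
    blk = [[0.0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            wt, wb = size - y, y + 1  # nearer boundary gets the larger weight
            wl, wr = size - x, x + 1
            blk[y][x] = (wt * top[x] + wb * bottom[x] +
                         wl * left[y] + wr * right[y]) / (wt + wb + wl + wr)
    return blk
```

For edge blocks this isotropic averaging would blur across the edge, which is why the algorithm switches to directional interpolation along the detected edge segments there.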
  • HUANG Ming-zheng, HAN Yi-shi
    Computer Engineering. 2014, 40(3): 278-282. https://doi.org/10.3969/j.issn.1000-3428.2014.03.059
    This paper proposes a zero-memory-access algorithm for direct Context-based Adaptive Variable Length Coding(CAVLC) decoding, aiming at the slow decoding speed and large number of memory accesses caused by repeated table look-ups for matching codewords in unstructured variable length coding tables. The algorithm takes the number of zeros in the CAVLC code prefix as the primary index to obtain the probable length of the input code, and takes the code suffix as the secondary index to obtain the codeword value, from which the decoded result is derived quickly. For a specific input code, the decoded output is obtained directly without any table look-up. Test results show that the algorithm not only achieves zero memory accesses for CAVLC decoding, but also achieves a 45% speed-up compared with the standard algorithm.
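The two-level indexing idea — leading zeros of the prefix as the primary index, suffix bits as the secondary index — can be illustrated with an Exp-Golomb-style decoder. The real CAVLC tables are context-dependent and differ from this sketch; it only shows how a structured prefix makes table-free decoding possible:

```python
def count_leading_zeros(bits):
    """Number of '0' bits before the first '1' in the bit string."""
    n = 0
    while n < len(bits) and bits[n] == '0':
        n += 1
    return n

def decode_ue(bits):
    """Exp-Golomb style decode: the leading-zero count gives the length class
    (primary index), the suffix bits select the value within the class
    (secondary index). Returns (value, bits_consumed)."""
    lz = count_leading_zeros(bits)
    length = 2 * lz + 1                       # codeword length follows from the prefix
    value = int(bits[:length] or '0', 2) - 1  # suffix bits resolve the value directly
    return value, length
```

Because both indices are computed arithmetically from the code itself, no stored codeword table is consulted, which is the "zero memory access" property.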
  • HU Xiao-li, GUO Ji-chang
    Computer Engineering. 2014, 40(3): 283-286,293. https://doi.org/10.3969/j.issn.1000-3428.2014.03.060
    An improved privacy-enabled object tracking system is proposed based on the original one. Using structurally random matrices at the encoder improves the generation speed of the random sampling matrix, and using fast methods such as GPSR-BB for reconstruction and particle filtering for analysis at the decoder improves the noise resistance of the system. Particle filtering for target tracking reduces the influence of tracking errors on the accuracy of the Compressed Sensing(CS) recovery algorithm, as well as the time needed for the analysis steps. Experimental results show that the proposed framework enables privacy in tracking while increasing robustness to illumination conditions. It also avoids tracking errors affecting the accuracy of CS reconstruction, is faster than the original system, and tracks well. The processing time is reduced by 30.3% and 51.6% compared with the BP and Lasso methods, respectively.
  • ZHU Chun, LAI Jin-mei
    Computer Engineering. 2014, 40(3): 287-293. https://doi.org/10.3969/j.issn.1000-3428.2014.03.061
    A platform-independent multi-thread routing method is proposed to reduce the long compile times of large designs in Field Programmable Gate Array(FPGA) software systems. The method parallelizes routing in two ways: each high-fanout net is partitioned into several subnets that are routed in parallel, and low-fanout nets with non-overlapping boundary diagrams are identified and routed in parallel. A new graph, named the boundary diagram, is constructed to facilitate selecting low-fanout nets that can be routed concurrently. In addition, load balancing strategies and synchronization mechanisms are introduced to promote routing efficiency and ensure deterministic results. Experimental results on an Intel quad-core processor platform show that this technique improves average routing efficiency by 90% with routing quality degrading by no more than 2.3% while providing deterministic results, which has great theoretical and practical value in the EDA field.
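Selecting low-fanout nets that can be routed concurrently amounts to finding nets whose routing regions do not interfere. A simplified stand-in for the boundary-diagram test, using axis-aligned bounding boxes and a greedy batching (the box model and names are assumptions; the paper's boundary diagram is a richer structure):

```python
def overlaps(a, b):
    """Axis-aligned bounding boxes (x1, y1, x2, y2); True when they intersect."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def parallel_groups(boxes):
    """Greedily batch nets whose bounding boxes are pairwise disjoint,
    so each batch can be handed to the routing threads concurrently."""
    groups = []
    for i, box in enumerate(boxes):
        for g in groups:
            if all(not overlaps(box, boxes[j]) for j in g):
                g.append(i)
                break
        else:
            groups.append([i])  # conflicts with every batch: start a new one
    return groups
```

Because nets in a batch touch disjoint regions, routing them in parallel cannot race on shared routing resources, which also keeps the result deterministic.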
  • LI Xiang, JIANG Yun, ZHOU Ze-xun, XIE Guo-cheng, CHEN Na, CHEN Shan
    Computer Engineering. 2014, 40(3): 294-297,302. https://doi.org/10.3969/j.issn.1000-3428.2014.03.062
    To resolve the interconnection incompatibility among many kinds of heterogeneous network devices in the modern Internet of Things, this paper presents a new universal bridge, based on the UPnP protocol, which supports multiple network protocols at the same time. For universal protocol bridging, the bridge uses hierarchical processing: it provides an independent, protocol-specific plug-in module for each heterogeneous network protocol at the bottom layer, loaded into the bridge dynamically, which results in a semi-open adapter engine. Experimental results show that, without changing the bridge's original structure, the bridge can dynamically provide or cancel bridging support for any single network protocol, and provides completely open and independent support for new protocols. The bridge resolves connectivity among many kinds of heterogeneous network devices, and achieves resource sharing and fusion of heterogeneous networks.
  • WANG Yan-xia, JIANG Yan-xia, WANG Ya-gang, LI Ye
    Computer Engineering. 2014, 40(3): 298-302. https://doi.org/10.3969/j.issn.1000-3428.2014.03.063
    The BMH2C algorithm combines the BMH and BMHS algorithms: the right shift distance of the pattern string is determined by the double character formed by the character t[k] at the end of the current window and the next character t[k+1], giving better performance than the BM, BMH and BMHS algorithms. Nevertheless, when the double character appears one or more times in the pattern, the shift distance of BMH2C can be increased further, which greatly reduces the number of window shifts and effectively improves matching speed. Therefore, an improved algorithm based on BMH2C is proposed. It takes into account the number of occurrences of the double character t[k]t[k+1] in the pattern string, as well as whether the character t[k+2] equals the character following the corresponding position of t[k]t[k+1] in the pattern. The improved algorithm uses two right-shift arrays and a preprocessed array of the pattern string; during matching, the value from one of the two shift arrays is selected as the shift distance of the current window by judging whether t[k+2] equals the corresponding character in the preprocessed array. Experimental results show that, under the same conditions, the matching time and the number of window shifts of the improved algorithm are on average 11.33% and 9.40% lower, respectively, than those of BMH2C, so the matching speed is effectively improved.
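The pair-based shift at the heart of BMH2C can be sketched as a Horspool-style search whose shift table is indexed by the two characters t[k]t[k+1] straddling the right window edge. This is a simplified reconstruction of the base idea, not the paper's improved variant:

```python
def pair_shift_table(p):
    """Shift for each adjacent character pair in the pattern; iterating left to
    right keeps the rightmost occurrence, i.e. the smallest safe shift."""
    m = len(p)
    return {(p[i], p[i + 1]): m - 1 - i for i in range(m - 1)}

def search_pair_horspool(t, p):
    """Search for p in t, shifting on the pair (t[k], t[k+1]) at the right
    window edge, in the spirit of BMH2C. Returns all match positions."""
    m, n = len(p), len(t)
    table = pair_shift_table(p)
    hits, i = [], 0
    while i + m <= n:
        if t[i:i + m] == p:
            hits.append(i)
        k = i + m - 1  # last index of the current window
        if k + 1 >= n:
            break
        # pair absent: shift m if t[k+1] can still start a match, else m+1
        i += table.get((t[k], t[k + 1]), m if t[k + 1] == p[0] else m + 1)
    return hits
```

Because a pair constrains two positions at once, it occurs in the pattern less often than a single character, so the average shift is longer than BMH's single-character shift.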
  • DUAN Wen-jia, LIU Xiao-jie
    Computer Engineering. 2014, 40(3): 303-305,309. https://doi.org/10.3969/j.issn.1000-3428.2014.03.064
    Failure detection is one of the crucial techniques for ensuring the serviceability of a disaster recovery system, but classical adaptive failure detection algorithms suffer from long detection times and high error rates. To address this problem, this paper proposes λ-FD, an adaptive failure detection algorithm based on the exponential distribution, which combines Pull and Push heartbeats to achieve re-checking. Experimental results show that λ-FD performs best with the threshold set to 0.68, achieving a failure detection time of 1 339.5 ms and an error rate of 0.055 7%, much lower than the 15.19% error rate of Φ-FD and the 24.92% of Chen-FD. The error rate of λ-FD is generally lower than that of classical algorithms at the same detection time, its detection time is the shortest at the same error rate, and it adapts better to disaster recovery systems in the Wide Area Network(WAN).
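An exponential-distribution detector in this spirit can be sketched by turning the time elapsed since the last heartbeat into a suspicion level via the exponential CDF with the observed mean inter-arrival time. The CDF-based estimator is an assumption for illustration; 0.68 is the threshold reported above:

```python
import math

def suspicion(elapsed, intervals):
    """Probability that the next heartbeat should already have arrived,
    modelling inter-arrival times as exponential with the observed mean."""
    mean = sum(intervals) / len(intervals)
    return 1.0 - math.exp(-elapsed / mean)

def suspect(elapsed, intervals, threshold=0.68):
    """Flag the monitored node once the suspicion level crosses the threshold."""
    return suspicion(elapsed, intervals) > threshold
```

A re-check step, as in λ-FD's Pull/Push combination, would then actively probe a suspected node before declaring failure, trading a little detection time for a lower error rate.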
  • XU Tai-long, LU Shi-bin, DAI Guang-zhen, MENG Jian, CHEN Jun-ning
    Computer Engineering. 2014, 40(3): 306-309. https://doi.org/10.3969/j.issn.1000-3428.2014.03.065
    Low power design technologies such as Multi-supply Multi-voltage(MSMV) and Power Shut-off(PSO) present many challenges for the testability design of modern very large scale integration System-on-Chip(SoC). Based on an efficient implementation platform built with industrial electronic design automation tools and widely used testability methods, a testability design scheme is proposed that includes scan chains, memory built-in self-test and boundary scan. Experimental results show that the scheme efficiently, conveniently and accurately completes the testability design of low-power SoC and works correctly on automatic test equipment, with a test coverage of 98.2% for the combinational and sequential logic scan chains.
  • HU Die, WU Jun-min
    Computer Engineering. 2014, 40(3): 310-314. https://doi.org/10.3969/j.issn.1000-3428.2014.03.066
    March algorithms are complex and are designed for testing memory chips, not for end users. To solve this problem, this paper presents a parallel memory test method that exploits hardware features through two parallel mechanisms: a chip-level parallel method, designed according to the working principle of DDR2, which tests several memory chips at once, and a Local Memory Controller(LMC)-level parallel method, designed according to the working principle of the memory controller, which tests several DDR2 memories at once. For the chip-level method, the more chips tested in parallel, the shorter the test time: from one chip to eight chips, the test time decreases almost linearly. For the LMC-level method, more LMCs give shorter times: from one LMC to two LMCs, the test time is almost halved. Experimental results show that this method greatly reduces the test time of the two algorithms while still allowing users to test memory.
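A single worker in either parallel scheme would run a March test over its assigned memory region. A sketch of the standard March C- element sequence over a simulated memory of 0/1 cells (the parallel dispatch and DDR2 specifics are omitted; `march_c_minus` is a hypothetical name):

```python
def march_c_minus(mem):
    """March C- over a simulated memory (a mutable sequence of 0/1 cells):
    up/down(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); up/down(r0).
    Returns the sorted indices of cells whose reads disagree with the model."""
    n, bad = len(mem), set()

    def rd(i, exp):
        if mem[i] != exp:
            bad.add(i)

    for i in range(n):            mem[i] = 0              # w0
    for i in range(n):            rd(i, 0); mem[i] = 1    # ascending r0, w1
    for i in range(n):            rd(i, 1); mem[i] = 0    # ascending r1, w0
    for i in reversed(range(n)):  rd(i, 0); mem[i] = 1    # descending r0, w1
    for i in reversed(range(n)):  rd(i, 1); mem[i] = 0    # descending r1, w0
    for i in range(n):            rd(i, 0)                # final r0
    return sorted(bad)
```

Running one such worker per chip (or per LMC) over disjoint address ranges is what yields the near-linear speed-up described above.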
  • XU Jia-ming, LI Xiao-dong, JIN Jian, MA Ying
    Computer Engineering. 2014, 40(3): 315-320. https://doi.org/10.3969/j.issn.1000-3428.2014.03.067
    Combining the advantages of the BM-Horspool(BMH) algorithm and the Quick-Search(QS) algorithm, an effective multiple-pattern string matching algorithm is proposed based on the concept of the Fan-Su(FS) algorithm. To reduce the number of comparisons and improve matching efficiency, the proposed algorithm skips as many characters as possible and avoids useless state transitions by making full use of both the successful and the failing information gathered during matching. Experimental results show that the proposed algorithm outperforms the ACBM and FS algorithms: its search time is only 10%~35% of that of the AC algorithm, 50%~60% of the ACBM algorithm, 70% of the FS algorithm, and 65% of the FSQB algorithm.
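The Quick-Search ingredient named above shifts on the character just past the window. A single-pattern sketch of Sunday's QS algorithm (the multi-pattern FS-based combination in the paper is more involved):

```python
def quick_search(t, p):
    """Sunday's Quick-Search: after each attempt, shift on the character
    immediately past the window, t[i+m]. Returns all match positions."""
    m, n = len(p), len(t)
    # distance from each pattern character's rightmost occurrence to the end;
    # left-to-right iteration keeps the rightmost (smallest) shift
    shift = {c: m - i for i, c in enumerate(p)}
    hits, i = [], 0
    while i + m <= n:
        if t[i:i + m] == p:
            hits.append(i)
        if i + m >= n:
            break
        i += shift.get(t[i + m], m + 1)  # unseen character: skip the whole window
    return hits
```

Because t[i+m] lies outside the current window, QS can shift by up to m+1 positions, one more than BMH's maximum, which is why the two are worth combining.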