
15 June 2014, Volume 40 Issue 6

  • YAN Min, DAI Rong-xin, CAI Yi-jun, XU Huan, ZHENG Qian, CHENG Cheng
    Computer Engineering. 2014, 40(6): 1-4. https://doi.org/10.3969/j.issn.1000-3428.2014.06.001
    Given the high integration and strong specificity of embedded systems, an embedded interrupt controller is designed based on the Advanced High-performance Bus(AHB). The AHB interface enhances the controller's versatility and portability. An Advanced RISC Machines(ARM) processor accesses the interrupt registers through the AHB bus to perform interrupt detection, response, processing and priority configuration. The design is implemented in Verilog HDL, and the logic circuit is synthesized, placed and routed using the SMIC 0.18 μm CMOS process. Test results show that power consumption is 5.36 mW under normal working conditions and that the worst case for completing one interrupt operation needs only 0.7 μs, which meets the requirements of real-time operation and low power consumption.
  • LIU Zhi, ZHANG Jing
    Computer Engineering. 2014, 40(6): 5-7,12. https://doi.org/10.3969/j.issn.1000-3428.2014.06.002
    To address the low real-time performance and poor security of the traditional approach in which a database writes dirty data in the buffer pool back to disk, this paper proposes a real-time tuning strategy for writing dirty data back to disk, based on a Hash algorithm and First In First Out(FIFO) doubly-linked lists. According to a workload-based tuning strategy, multiple FIFO queues are created in memory, and the dirty blocks in the buffer are distributed across different lists by hashing their last modified time, balancing load across the FIFO queues. Meanwhile, global timing constraints are built so that dirty blocks are written back to disk in batches, avoiding the high resource consumption and high risk of data loss on downtime of the traditional write-back strategy. Experimental results show that this strategy improves the real-time performance and security of writing back dirty data and reduces the data loss rate.
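    A minimal sketch of the structure the abstract describes, assuming dirty blocks carry a last-modified timestamp; names such as DirtyPool and flush_batch are illustrative, not from the paper:

```python
from collections import deque

class DirtyPool:
    """Dirty blocks hashed by last-modified time into several FIFO queues."""
    def __init__(self, num_queues=4, batch_size=8):
        self.queues = [deque() for _ in range(num_queues)]
        self.batch_size = batch_size

    def mark_dirty(self, block_id, last_modified):
        # Hash on the last-modified time to spread load across the queues.
        q = self.queues[hash(last_modified) % len(self.queues)]
        q.append(block_id)

    def flush_batch(self, write_to_disk):
        # Write dirty blocks back in batches, oldest first per queue (FIFO).
        for q in self.queues:
            for _ in range(min(self.batch_size, len(q))):
                write_to_disk(q.popleft())

pool = DirtyPool()
pool.mark_dirty("blk-17", last_modified=1718000000.5)
pool.flush_batch(lambda b: print("flushed", b))
```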
  • WANG En-dong, WEN Yuan, ZHANG Yu, SHI Guang-yuan
    Computer Engineering. 2014, 40(6): 8-12. https://doi.org/10.3969/j.issn.1000-3428.2014.06.003
    Thin provisioning is an advanced storage virtualization technology that improves storage resource utilization and thus meets the needs of information infrastructure construction. The low resource utilization of traditional storage systems has become increasingly serious, so improving their efficiency is a pressing problem. This paper designs and implements I_THINP, a modular, hierarchical and efficient thin-provisioning architecture for Storage Area Network(SAN). It combines a pool organization module, a thin-provisioning module, a thin-reclamation module, a dynamic expansion module and an early-warning module, giving I_THINP comparatively complete thin-provisioning functionality. Experimental results show that the I_THINP architecture raises storage resource utilization to over 98%, a twofold improvement over the traditional storage system, and that it is applicable to real SAN environments.
  • ZHANG Jing, CHEN Mo-liang
    Computer Engineering. 2014, 40(6): 13-15,28. https://doi.org/10.3969/j.issn.1000-3428.2014.06.004
    An algorithm-level energy consumption estimation model is proposed to support algorithm-level energy optimization of embedded software. Taking the Traveling Salesman Problem(TSP) as an example, different algorithms such as a neural network algorithm and a Genetic Algorithm(GA) are used for energy consumption estimation. By analyzing execution counts, algorithm complexity and run time, the energy consumption estimates produced by the model for different algorithms are compared with measured values obtained on a simulation platform. Experiments on the sim-panalyzer simulation platform show that the error between estimated and measured values is about 10%, which validates the accuracy of the model.
  • YUAN Shao-qin, YU Xiao-zhou, ZHOU Jun, WANG Rui, BAI Bo
    Computer Engineering. 2014, 40(6): 16-19,35. https://doi.org/10.3969/j.issn.1000-3428.2014.06.005
    The QB50 CubeSat network for measurements in the lower thermosphere is a project of the EU's 7th Framework Programme(FP7), and "AOXIANG-1" is one of the 50 CubeSats in the QB50 atmospheric detection project. By analyzing the tasks and operating environment of the CubeSat, a multi-task-oriented On-board Computer(OBC) system is designed using a hardware/software co-design method. The system uses the high-performance domestic processor BM3109IB, based on the SPARC V8 architecture, as the central processing module, and adopts centralized data processing and satellite housekeeping management. An embedded real-time multi-task operating system is introduced for task scheduling, realizing attitude determination and control, data processing and storage, operation mode management, daily housekeeping and other CubeSat functions. The OBC system of "AOXIANG-1" achieves a balance among power consumption, volume and performance that satisfies the QB50 flight requirements.
  • LI Yi-bing, HUANG Hui, YE Fang, SUN Zhi-guo
    Computer Engineering. 2014, 40(6): 20-24. https://doi.org/10.3969/j.issn.1000-3428.2014.06.006
    To overcome the shortcomings of traditional signal-feature detection algorithms in Cognitive Radio(CR) networks, a detection algorithm for Primary User Emulation(PUE) based on two-dimensional features is proposed. Building on traditional decision theory, it introduces a new instantaneous-characteristic parameter: the mean of the absolute value of the zero-centered, normalized instantaneous energy. A two-dimensional vector composed of this parameter and the box dimension is constructed and fed to a Support Vector Machine(SVM) classifier to judge whether a PUE attack is present. Simulation results show that the proposed algorithm effectively identifies the primary user attacker at a signal-to-noise ratio of 5 dB, retains a high PUE detection probability even at 0 dB while causing little interference to the primary user, and thus has strong anti-noise performance.
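    A hedged sketch of the two-dimensional feature and SVM classifier described above, on synthetic signals; the box-counting estimate and the signal models are crude stand-ins for the paper's computation:

```python
import numpy as np
from sklearn.svm import SVC

def energy_feature(sig):
    """Mean absolute value of the zero-centered, normalized instantaneous energy."""
    e = np.abs(sig) ** 2
    return np.mean(np.abs(e / e.mean() - 1.0))

def box_dimension(sig, box_counts=(4, 8, 16, 32)):
    """Box-counting dimension of the signal magnitude curve in the unit square."""
    mag = np.abs(sig)
    y = (mag - mag.min()) / (np.ptp(mag) + 1e-12)
    t = np.linspace(0.0, 1.0, len(y), endpoint=False)
    filled = [len({(int(ti * k), int(yi * k)) for ti, yi in zip(t, y)})
              for k in box_counts]              # occupied boxes on a k x k grid
    slope, _ = np.polyfit(np.log(box_counts), np.log(filled), 1)
    return slope                                # N(1/k) ~ k^D

# Synthetic stand-ins: a clean primary signal vs. a noise-like emulated signal.
rng = np.random.default_rng(0)
legit = [np.sin(np.linspace(0, 40, 512)) + 0.1 * rng.normal(size=512) for _ in range(50)]
attack = [rng.normal(size=512) for _ in range(50)]
X = np.array([[energy_feature(s), box_dimension(s)] for s in legit + attack])
y = np.array([0] * 50 + [1] * 50)
clf = SVC(kernel="rbf").fit(X, y)               # 2-D feature vector -> SVM decision
```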
  • MAO Jian-lin, XIANG Feng-hong, FU Li-xia, GUO Ning, DUAN Shao-mi
    Computer Engineering. 2014, 40(6): 25-28. https://doi.org/10.3969/j.issn.1000-3428.2014.06.007
    Channel resource allocation among nodes in a wireless mobile network is a large-population stochastic game. An evolutionary game model is built in which the profit of successfully sending a packet and the costs of overhearing, backoff and collision are considered. The Evolutionarily Stable Strategy(ESS) is derived and proved, and the replicator dynamics of the evolving MAC competition are given. Numerical simulation results show that the model provides the stronger stability concept of an ESS, which ensures the robustness of the evolutionarily stable point among mutually interfering mobile nodes.
  • YU Xiao-dan, CHEN Xiao-min, TAN Wei
    Computer Engineering. 2014, 40(6): 36-39,44. https://doi.org/10.3969/j.issn.1000-3428.2014.06.009
    An Adaptive Transmission Power Allocation(ATPA) algorithm is presented for the Vertical Bell Laboratories Layered Space-Time(V-BLAST) system with feedback delay, with the aim of minimizing the overall Bit Error Rate(BER). At the receiver, Zero Forcing(ZF) detection is used and full channel state information is assumed to be available; the transmitter obtains this information through a delayed feedback link. The algorithm computes the conditional Probability Density Function(PDF) of the true Signal Noise Ratio(SNR) from the PDF of the SNR estimate, from which the overall BER expression is derived. At the transmitter, the transmit power of each antenna is obtained by solving the optimization problem under a total power constraint. Simulation results show that the proposed ATPA algorithm effectively improves BER performance: at a target BER of 10^-3 and a normalized feedback delay factor of 0.0001, the modified V-BLAST system with adaptive transmit power allocation outperforms the conventional V-BLAST system by about 5 dB over the delayed feedback link.
  • DONG Zheng, GONG Ke-xian, GE Lin-dong
    Computer Engineering. 2014, 40(6): 40-44. https://doi.org/10.3969/j.issn.1000-3428.2014.06.010
    Symmetric α-Stable(SαS) distributed noise is a kind of non-Gaussian noise with obvious impulsive characteristics compared with Gaussian noise, so soft de-mapping designed for Gaussian noise does not apply to SαS noise. Exploiting the linear relationship between the log-likelihood ratio of soft de-mapping in Gaussian noise and the signal amplitude, a soft de-mapping algorithm for SαS noise is proposed. Its main idea is to add a preprocessing step between the Gaussian-noise soft de-mapping and the decoding algorithm: the bit soft information is limited by the preprocessing, and soft information with large amplitude is set to zero. Simulation results show that, at the same bit error rate, the Generalized Signal-to-noise Ratio(GSNR) required by the proposed algorithm is 0.3 dB lower than that of the Huber method in SαS noise with α=1.84, and 2 dB~5 dB lower with α=1.3.
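    A minimal sketch of the preprocessing step the abstract describes, with an assumed amplitude threshold:

```python
import numpy as np

def limit_soft_info(llr, threshold):
    """Zero out bit soft information whose amplitude exceeds the threshold.

    Large-amplitude LLRs are the ones most distorted by impulsive SaS noise,
    so they are treated as unreliable and erased before decoding.
    """
    out = np.asarray(llr, dtype=float).copy()
    out[np.abs(out) > threshold] = 0.0
    return out

llrs = np.array([0.8, -12.5, 2.1, 30.0, -0.3])
print(limit_soft_info(llrs, threshold=5.0))  # the two large-amplitude LLRs are erased
```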
  • SUN Lu, LAN Ju-long
    Computer Engineering. 2014, 40(6): 45-48,52. https://doi.org/10.3969/j.issn.1000-3428.2014.06.011
    To solve the problem that existing queue scheduling algorithms meet the Quality of Service(QoS) requirements of only one specific type of traffic and cannot support multiple types, a nested queue scheduling algorithm based on Differentiated Service(DiffServ) is proposed. The algorithm combines existing algorithms in a nested fashion, schedules queues according to the nesting model, and balances QoS guarantees across multiple traffic types. Simulation results show that the algorithm guarantees the QoS requirements of different traffic types; compared with existing algorithms, each of its performance indicators is closer to the optimum, and it meets the requirements of different kinds of traffic while supporting more traffic types.
  • LI Jun, YE Lan-lan, JIN Ning, LI Zheng-quan
    Computer Engineering. 2014, 40(6): 49-52. https://doi.org/10.3969/j.issn.1000-3428.2014.06.012
    To maximize the capacity of an Orthogonal Frequency Division Multiple Access(OFDMA) system, this paper proposes a smallest-channel-capacity-first subcarrier allocation algorithm. During iterative water-filling, the algorithm computes each user's capacity as if all subcarriers could be assigned to that user alone, and then assigns the next subcarrier to the user with the minimum capacity, which avoids assigning the subcarriers with the worst channel capacity to users. Simulation results show that the algorithm avoids the inaccuracy of allocating subcarriers under an equal-power assumption; compared with the Worst User First(WUF) and Worst Subcarrier Avoiding(WSA) allocation algorithms, it improves system capacity by 15.7% and 12.2% respectively under different Signal Noise Ratio(SNR) values.
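    A simplified greedy sketch of smallest-capacity-first allocation, assuming equal power per subcarrier rather than the paper's full iterative water-filling:

```python
import numpy as np

def min_capacity_first(gains, noise=1.0, power_per_sub=1.0):
    """Assign each subcarrier to the user with the currently smallest capacity.

    gains: (num_users, num_subcarriers) channel power-gain matrix.
    Returns a list mapping subcarrier index -> user index.
    """
    num_users, num_subs = gains.shape
    capacity = np.zeros(num_users)
    owner = [-1] * num_subs
    for s in range(num_subs):
        u = int(np.argmin(capacity))            # most starved user goes first
        owner[s] = u
        snr = power_per_sub * gains[u, s] / noise
        capacity[u] += np.log2(1.0 + snr)       # Shannon capacity increment
    return owner

rng = np.random.default_rng(1)
gains = rng.exponential(scale=1.0, size=(3, 8))  # Rayleigh-fading power gains
print(min_capacity_first(gains))
```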
  • LIU Wan-xian, PENG Hua, YU Pei-dong
    Computer Engineering. 2014, 40(6): 53-57. https://doi.org/10.3969/j.issn.1000-3428.2014.06.013
    Aiming at the limited error tolerance of current polynomial estimation methods for Pseudo Noise(PN) code sequences, an improved polynomial estimation algorithm is proposed that combines statistical preprocessing with soft-decision solving of error-containing equations. The algorithm exploits the periodicity of the PN code and reduces the bit error rate through statistical preprocessing; the generator polynomial of the PN code is then estimated by solving the error-containing equations through soft decision. Simulation results demonstrate that the improved algorithm tolerates errors well: the correct estimation rate exceeds 90% at a Signal Noise Ratio(SNR) of -9 dB, an SNR gain of at least 6 dB over traditional algorithms.
  • DENG Shao-jiang, ZHANG Xue-lin, TANG Ji-qiang
    Computer Engineering. 2014, 40(6): 58-63. https://doi.org/10.3969/j.issn.1000-3428.2014.06.014
    Aiming at the low connectivity, high storage consumption and weak anti-attack capability of current key pre-distribution strategies for Wireless Sensor Network(WSN), this paper proposes a new key distribution scheme based on a grid deployment model. The scheme divides the deployment area into identical, non-overlapping hexagonal grids, allocates a number of distinct key spaces to each grid, and ensures that any pair of adjacent nodes shares exactly one key space. Key information is distributed to sensor nodes according to their deployment information. The performance is analyzed in terms of storage overhead, network connectivity and security. Experimental results demonstrate that, compared with existing key distribution schemes, the proposed scheme achieves a network connectivity rate of 1, reduces the storage requirement, and substantially boosts the WSN's ability to resist random attacks and area attacks.
  • ZHOU Rui, WANG Xiao-ming
    Computer Engineering. 2014, 40(6): 64-69. https://doi.org/10.3969/j.issn.1000-3428.2014.06.015
    Cloud storage is a trend in the future development of storage, but it also brings new security challenges: cloud service providers may manipulate data for their own purposes, so a reliable mechanism is needed to ensure the integrity of cloud data. This paper proposes an integrity verification algorithm for cloud data based on a homomorphic Hash function. With a trusted third-party auditor, the algorithm checks the integrity of cloud data by aggregating a number of RSA signatures for verification. To avoid disclosing user data, it uses homomorphic linear authentication and a random mask technique to achieve privacy preservation. The algorithm resists malicious server attacks and supports data dynamics. Compared with current auditing algorithms, it reduces the computational cost of the verification process and greatly reduces the communication cost of batch auditing, thereby improving verification efficiency.
  • LIANG Tao, LI Hua
    Computer Engineering. 2014, 40(6): 70-74. https://doi.org/10.3969/j.issn.1000-3428.2014.06.016
    Aiming at the problem that the iteration points of the Logistic chaotic map are concentrated and its ergodicity is poor when used for image encryption, this paper puts forward an improved image encryption algorithm based on the Skew Tent chaotic map and Deoxyribonucleic Acid(DNA) theory. It uses the Skew Tent map to produce two chaotic sequences that scramble the pixel locations, encodes the scrambled image as a DNA sequence, uses a third Skew Tent sequence to scramble that DNA sequence, and obtains the final encrypted image through the inverse DNA transform. The algorithm is simulated and analyzed in terms of security and scrambling degree, and compared with traditional scrambling methods such as the Arnold transformation and the Hilbert curve, as well as with an algorithm based on the Logistic map and DNA theory. The results show that the proposed algorithm has better encryption properties.
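    A minimal sketch of the Skew Tent map and a chaotic position scramble; the parameters x0 and p are illustrative:

```python
def skew_tent_sequence(x0, p, n):
    """Generate n values of the skew tent chaotic map.

    x' = x/p          for 0 <= x < p
    x' = (1-x)/(1-p)  for p <= x <= 1
    """
    seq, x = [], x0
    for _ in range(n):
        x = x / p if x < p else (1.0 - x) / (1.0 - p)
        seq.append(x)
    return seq

# Scramble pixel positions by sorting a chaotic sequence (illustrative).
chaos = skew_tent_sequence(x0=0.37, p=0.6, n=16)
perm = sorted(range(16), key=lambda i: chaos[i])
pixels = list(range(16))                 # stand-in for a 4x4 image's pixels
scrambled = [pixels[i] for i in perm]
```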
  • GUAN Ya-wen, LIU Tao
    Computer Engineering. 2014, 40(6): 75-78. https://doi.org/10.3969/j.issn.1000-3428.2014.06.017
    In a clustered Wireless Sensor Network(WSN), nodes within a cluster often multicast. To guarantee the security of messages and node information, this paper designs a flexible and efficient group key management scheme. The calculation of the initial key set, and the group key update when a node leaves, are based on an identity-based broadcast encryption algorithm, which reduces the length of broadcast messages and the energy consumed in transmission. For node joining and for key updates at the end of a key's life period, a low-energy symmetric encryption method is used. The scheme resists collusion attacks and counterfeiting attacks. Security analysis shows that, under the same security standard and compared with schemes such as EGKAS, the scheme occupies less storage space and consumes less energy; its storage and group key update costs are independent of the group size, so it scales well.
  • TANG Quan-you, MA Chuan-gui
    Computer Engineering. 2014, 40(6): 79-84. https://doi.org/10.3969/j.issn.1000-3428.2014.06.018
    Fully Homomorphic Encryption(FHE) allows one to compute arbitrary functions over encrypted data without the decryption key, and is an important technology for private data protection in cloud computing. The key point in constructing an FHE scheme is to control the noise produced during homomorphic operations on ciphertexts, and the Sparse Subset Sum Problem(SSSP) is one of the basic hard problems used for noise control. An improved reaction attack against FHE schemes based on the hardness of SSSP is proposed: the adversary performs special computations on the public key and recovers the whole decryption key through access to a decryption oracle. Analysis shows that, compared with known similar attacks, the advantage of this attack is its full use of precomputation, which improves efficiency and broadens applicability.
  • XIAO Zhen-jiu, TIAN Shu-jiao, CHEN Hong
    Computer Engineering. 2014, 40(6): 85-88. https://doi.org/10.3969/j.issn.1000-3428.2014.06.019
    To address the small watermark embedding capacity, limited robustness and poor concealment of existing schemes, this paper proposes a watermarking algorithm based on image texture complexity in the wavelet domain, designed around the characteristics of the Human Visual System(HVS). The algorithm separates texture regions using image entropy, and regions of different texture are embedded with different amounts of watermark. The watermark image is first transformed with the Logistic map, then embedded into the low-frequency coefficients of the Discrete Wavelet Transform(DWT), with the embedding strength controlled according to the HVS. The Bit Error Ratio(BER) and Peak Signal-to-noise Ratio(PSNR) are used to evaluate the algorithm. Experimental results show that the algorithm resists noise, filtering, lossy compression, cropping, key attacks and other common attacks; it is robust and conceals the watermark well.
  • ZHOU Ling-ling, SHI Run-hua, ZHONG Hong, ZHANG Qing
    Computer Engineering. 2014, 40(6): 89-94. https://doi.org/10.3969/j.issn.1000-3428.2014.06.020
    In a Wireless Sensor Network(WSN), an attacker can locate the source node by tracing transmissions back hop by hop and then attack the protected object, so protecting source location privacy is essential. To remedy the weakness of existing phantom routing protocols, whose routes may pass through the visible area and thus shorten the safe time, this paper proposes a new source-location privacy preservation protocol that combines directed constant-altitude routing with phantom routing. Theoretical analysis indicates that the protocol completely avoids failure paths and increases the number of effective paths, thereby improving the safe time. Simulation results show that when the source node's location is fixed, the protocol improves the safe time by 50% with little extra packet latency, and when the source location changes frequently, it has an obvious communication advantage over PUSBRF.
  • TIAN Zhi-hui, JIN Zhi-gang, WANG Ying
    Computer Engineering. 2014, 40(6): 95-98,103. https://doi.org/10.3969/j.issn.1000-3428.2014.06.021
    Safety verification of security protocols is extremely important for ensuring the safety of network communication, and formal analysis makes such verification simple, standardized and practical, making it a research hotspot in information security. A Colored Petri Net(CPN)-based verification approach to the 802.1x/EAP-MD5 authentication protocol is proposed. The protocol is modeled with CPN, the potentially insecure states during a protocol run are analyzed, and the reachability of each insecure state is judged through CPN reachability analysis. Based on this analysis, an improved protocol is presented that eliminates the discovered vulnerabilities of 802.1x/EAP-MD5: a pre-shared key mechanism generates the session key that encrypts communication, which raises the difficulty of Man in the Middle(MITM) attacks, and a digital certificate authenticates the server, which strengthens the security of network access authentication.
  • GUO Song-chu, CUI Jie
    Computer Engineering. 2014, 40(6): 99-103. https://doi.org/10.3969/j.issn.1000-3428.2014.06.022
    With the growing security requirements of Vehicular Ad Hoc Networks(VANETs) applications, both vehicles and service providers place high demands on the protection of their private information, yet traditional privacy-preserving query schemes for mobile devices cannot protect the privacy of the vehicle and of the service provider's database at the same time. This paper proposes an efficient location service query scheme based on Private Information Retrieval(PIR) technology. Vehicles verify other vehicles and roadside base stations through anonymous authentication; meanwhile, the data in the service provider's database is obscured by a secure coprocessor, and the query service is implemented with a Proxy Re-cryptography(PRC) algorithm. Analysis shows that the scheme achieves anonymous queries for vehicles while protecting both the location privacy of vehicles and the data privacy of service provider databases, and that it needs only two rounds of communication, making it communication-efficient.
  • PENG You, SONG Yan, JU Hang, WANG Yan-zhang
    Computer Engineering. 2014, 40(6): 104-108,114. https://doi.org/10.3969/j.issn.1000-3428.2014.06.023
    By its nature, emergency management requires a great deal of interoperation and coordination in a multi-domain environment, but current solutions based on Role-based Access Control(RBAC) induce security conflicts such as cyclic inheritance, separation-of-duty violations and modality conflicts. Drawing on extensive experience in developing emergency management information systems, this paper applies organizational management methods and proposes a position-based multi-domain access control mechanism. Through an analysis of the implementation process, it focuses on how the mechanism resolves these security conflicts, and a practical application case is used to verify its correctness and feasibility.
  • HUANG Ru-fen, NONG Qiang, HUANG Zhen-jie
    Computer Engineering. 2014, 40(6): 109-114. https://doi.org/10.3969/j.issn.1000-3428.2014.06.024
    To effectively protect the legitimate rights of the signer, prevent unauthorized use of blind signatures, and avoid the costly certificate management of traditional public key cryptography, this paper proposes a certificate-based partially blind signature using bilinear maps, which incorporates certificate-based encryption into the partially blind signature system. Formal definitions and security definitions are given, and a concrete certificate-based partially blind signature scheme is constructed. A rigorous security proof is given in the random oracle model, with security based on the Computational Diffie-Hellman(CDH) assumption. Analysis shows that the new scheme not only simplifies the issuance, management and storage of certificates compared with partially blind signatures in traditional public key settings, but also overcomes the private key escrow problem of ID-based partially blind signatures.
  • ZHANG Jun-yan, CHEN Qing-ming
    Computer Engineering. 2014, 40(6): 115-119,124. https://doi.org/10.3969/j.issn.1000-3428.2014.06.025
    As the range of applications of security chips expands and their application environments grow more complex, penetration testing of security chips, and evaluation of that testing, become necessary. This paper proposes an attack-tree-based method for penetration testing of security chips. It analyzes the penetration testing process, adopts multi-attribute utility theory for attack events, and proposes a quantitative method for calculating attack cost together with an attack path analysis method. Application results show that the method is objective and effective: it can guide the implementation of security chip penetration testing and inform chip security measures.
  • TIAN Wei-dong, JI Yun
    Computer Engineering. 2014, 40(6): 120-124. https://doi.org/10.3969/j.issn.1000-3428.2014.06.026
    Traditional frequent essential itemset mining requires generating candidate itemsets and scanning the database many times, which leads to low efficiency. Motivated by this, a fast algorithm for mining frequent essential itemsets is proposed. The algorithm uses the Rymon enumeration tree to organize the search space with a divide-and-conquer strategy, and prunes selected paths. It uses properties unique to frequent essential itemsets to quickly determine whether a candidate is a frequent essential itemset, without comparing the disjunctive support of all its direct subsets, which speeds up mining. Experimental results show that the algorithm correctly produces all elements of the concise frequent-essential-itemset representation and greatly reduces time consumption: by a factor of two on dense datasets and by at least 30% on sparse datasets.
  • WANG Lian-guo, SHI Qiu-hong
    Computer Engineering. 2014, 40(6): 125-128,133. https://doi.org/10.3969/j.issn.1000-3428.2014.06.027
    The basic Artificial Fish Swarm Algorithm(AFSA) uses a distance-based neighborhood topology, which leads to a large amount of computation and slow speed. Aiming at this problem, four typical population topologies(star, wheel, circle and John von Neumann) are introduced in place of the distance-based neighborhood, and the effects of the different neighborhood topologies on AFSA performance are analyzed. Experimental results on five functions show that the star topology is more suitable for unimodal functions, while the wheel and circle topologies are better for functions with many local optima. Choosing a topology appropriate to the complexity of the optimization problem can therefore improve the optimization performance of AFSA.
  • QIANG Ning, KANG Feng-ju
    Computer Engineering. 2014, 40(6): 129-133. https://doi.org/10.3969/j.issn.1000-3428.2014.06.028
    For a Multi-Agent System(MAS) with limited resources, unknown environmental information, and tasks generated randomly in sequence, a new fitness function based on the balance of residual resources is designed and an improved Binary Particle Swarm Optimization(BPSO) algorithm is proposed. The imbalance of the system's residual resources is quantified by introducing a penalty coefficient; the new fitness function considers not only the system profit but also the balance of residual resources, and trades them off by adjusting the penalty coefficient. The improved BPSO algorithm is used to optimize coalition formation, redefining the particle velocity and position update formulas so that particle divergence is effectively controlled and the local search ability is improved. Simulation results show that a MAS using the proposed fitness function executes more tasks than one using traditional fitness functions, and that the proposed algorithm outperforms BPSO and a Genetic Algorithm(GA) in solution quality, convergence speed and stability.
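    A hedged sketch of a penalty-coefficient fitness that trades profit against residual-resource balance; the exact functional form here (standard deviation as the imbalance measure) is an assumption, not the paper's formula:

```python
import numpy as np

def fitness(profit, residual, penalty=0.5):
    """Profit minus a penalty on the imbalance of residual resources.

    residual: per-agent leftover resource vector after a coalition is formed.
    penalty:  trades profit against balance via the penalty coefficient.
    """
    imbalance = np.std(residual)          # one simple imbalance measure
    return profit - penalty * imbalance

balanced   = fitness(profit=10.0, residual=np.array([3.0, 3.0, 3.0]))
unbalanced = fitness(profit=10.0, residual=np.array([0.0, 1.0, 8.0]))
print(balanced > unbalanced)              # True: balanced residues are rewarded
```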
  • SUN Yao-sheng, HUANG Zhang-can, CHEN Yu
    Computer Engineering. 2014, 40(6): 134-137,141. https://doi.org/10.3969/j.issn.1000-3428.2014.06.029
    By analyzing and abstracting the life behavior of eels, this paper presents a new eel swarm intelligence algorithm for discrete problems. It describes the behavior of migratory eels, extracts three important behaviors(concentration adaptation, neighbor learning and sex mutation) and establishes a mathematical model of each. Based on a rational organization of these three behaviors, and introducing a classification system and the idea of identification degrees, a discrete eel algorithm for combinatorial optimization problems is proposed. In particular, for neighbor learning among discrete individuals, a new fragment-cutting method is used so that information can be passed between individuals of the population. The algorithm is tested on TSP instances from the public TSPLIB library, and the results show that it has strong optimization capability.
  • MO Yuan-yuan, GUO Jian-yi, YU Zheng-tao, JIANG Nian-shu, XIAN Yan-tuan
    Computer Engineering. 2014, 40(6): 138-141. https://doi.org/10.3969/j.issn.1000-3428.2014.06.030
    Existing hyponymy extraction methods for domain ontology concepts are limited by manual annotation and specific patterns. Aiming at this problem, this paper proposes a hyponymy extraction method for domain ontology concepts based on Cascaded Conditional Random Fields(CCRF). Using free text as the extraction object, it adopts two layers of conditional random fields to identify domain concepts. The lower-layer model considers long-distance dependencies between words, models the word sequence, extracts concepts sequentially, and matches them against template-defined characteristics; the higher-layer model annotates the semantics of concept pairs with hyponymy and identifies the hyponymy relations between domain ontology concepts. Open tests on a real corpus demonstrate that the proposed method performs well.
  • ZENG Jie, SHEN Xian-gan, GAO Zhi-yong, LIU Hai-hua
    Computer Engineering. 2014, 40(6): 142-147. https://doi.org/10.3969/j.issn.1000-3428.2014.06.031
    With the wide application of video surveillance, motion detection has become a research focus, but the large amount of uncertainty in motion makes it difficult. Because the human visual system perceives moving objects very efficiently, researchers have constructed models of motion perception from physiological and psychological viewpoints. Following these studies, this paper proposes a moving object detection model that simulates the primary visual cortex. In the model, the Classical Receptive Field(CRF) of simple cells in the primary visual cortex is simulated by three-dimensional Gabor spatio-temporal filters, and the motion energy serving as the complex-cell response is obtained by a nonlinear combination. Two mechanisms, the center-surround property of cells and correlation-based motion detection, are used to enhance motion information and suppress environmental interference, and further information is fused to obtain a saliency map of the moving object. A Winner-Take-All(WTA) neural network is then used to realize the perception of the moving object. Experimental results show that the model effectively detects moving objects in video and computes faster than other visual attention models based on visual neuron processing.
  • WANG Jing-xiao, GAO Qian-kun, WANG Qun-shan
    Computer Engineering. 2014, 40(6): 148-153. https://doi.org/10.3969/j.issn.1000-3428.2014.06.032
    The Pegasos algorithm is an efficient method for solving large-scale Support Vector Machine(SVM) problems, and recent work shows that Pegasos achieves the optimal O(1/T) convergence rate by adding epochs to the stochastic gradient descent procedure. The Composite Objective Mirror Descent(COMID) algorithm is a stochastic regularized extension of the mirror descent method that preserves the structure of the regularizer, but for strongly convex problems its convergence rate is only O(logT/T). Aiming at this problem, this paper introduces epochs into COMID and presents an optimal regularized mirror descent algorithm for solving hybrid L1 and L2 regularization problems. It proves that the algorithm achieves the optimal O(1/T) convergence rate and obtains the same sparsity as COMID. Experimental results on large-scale datasets demonstrate the correctness of the theoretical analysis and the effectiveness of the proposed algorithm.
  • HAN Wei, SHI Wei-wei, SI Wei-chao
    Computer Engineering. 2014, 40(6): 154-156. https://doi.org/10.3969/j.issn.1000-3428.2014.06.033
    An opposition-based niche genetic algorithm with multi-crossover chaotic selection is proposed to enhance the global and local search ability of the niche genetic algorithm. A piecewise linear chaotic map is used to generate a chaotic sequence; before every crossover operation an element of this sequence is drawn, and the corresponding crossover operator is chosen according to the range the element falls in. The remaining operations of the niche genetic algorithm then produce an excellent offspring population. Finally, an opposition-based search strategy produces an opposition offspring population, and the final offspring population takes the better individuals from the two populations to improve local search. Experimental results show that the proposed algorithm outperforms other niche genetic algorithms in both best solution and mean value, demonstrating that it is feasible and effective.
  • ZHANG Yin-feng, GUO Hua-ping, ZHI Wei-mei, FAN Ming
    Computer Engineering. 2014, 40(6): 157-161,165. https://doi.org/10.3969/j.issn.1000-3428.2014.06.034
    Aiming at the low classification performance on imbalanced data of ensembles constructed on balanced datasets, this paper proposes a simple but effective Ensemble Pruning method Based on Positive Examples(EPPE) to improve ensemble classification performance on imbalanced datasets. It builds a classifier library, directly treats the positive(minority-class) examples as the pruning set, and, guided by the MBM index on the pruning set, selects an optimal or near-optimal classifier as the target classifier for prediction. Experimental results on twelve UCI datasets indicate that EPPE not only significantly improves the recall rate on positive(minority-class) examples, but also increases overall accuracy compared with EasyEnsemble, Bagging and C4.5.
  • ZHOU Jian-feng, YANG Ai-min, ZHOU Yong-mei, WANG Xuan-xuan
    Computer Engineering. 2014, 40(6): 162-165. https://doi.org/10.3969/j.issn.1000-3428.2014.06.035
    Analyzing and monitoring the emotional information in micro-blog texts helps mine user behavior and provides a reference for micro-blog public opinion supervision. However, micro-blog texts are short and non-standard and contain large numbers of variant spellings and new words, so classifying them on sentiment features alone yields poor accuracy and hardly meets practical demands. Therefore, a lexicon of bigram collocations is constructed from a micro-blog corpus, and the PMI-IR-P algorithm, based on the PMI-IR algorithm, is proposed to calculate the semantic weight of each collocation. Combining this with a sentiment dictionary, micro-blog sentiment feature vectors are generated by statistical methods, and the C4.5 algorithm is used to build classification models that classify the sentiment polarity of micro-blogs. In the experiments, different datasets are used to build the collocation lexicon and the classification models, and the results are compared with a method based on a sentiment dictionary and rules as well as with the Naive Bayes method. Experimental results show that with C4.5 the accuracy of micro-blog sentiment classification reaches 87%, a better result.
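    A minimal sketch of the underlying Pointwise Mutual Information(PMI) computation; the paper's PMI-IR-P variant adds corpus-specific adjustments not shown here:

```python
import math

def pmi(count_xy, count_x, count_y, total):
    """Pointwise Mutual Information of a word pair from corpus counts.

    PMI(x, y) = log2( p(x, y) / (p(x) * p(y)) )
    """
    p_xy = count_xy / total
    p_x, p_y = count_x / total, count_y / total
    return math.log2(p_xy / (p_x * p_y))

# Toy counts: the collocation appears 30 times in 10,000 windows.
print(pmi(count_xy=30, count_x=100, count_y=200, total=10_000))  # ~3.9
```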
  • ZHANG Pei-qian, WANG Zhi-hai
    Computer Engineering. 2014, 40(6): 166-170. https://doi.org/10.3969/j.issn.1000-3428.2014.06.036
    Traditional machine learning algorithms cannot deal effectively with network data because of the autocorrelation between instances. Regression inference on network data is a challenging task: many algorithms exist for network classification, but very few for network regression. Aiming at the regression prediction problem on network data, this paper takes autocorrelation into account and proposes an Iterative Weighted linear Regression(IWR) algorithm. Weighted regression serves as the local predictor in an iterative learning process, and the predicted labels are updated at each step until the stopping requirement is met. Experimental results on spatial and social networks show that the proposed algorithm reduces prediction error compared with traditional regression algorithms and with the NCLUS algorithm.
  • MA Qian-li, ZHANG Jun-hao
    Computer Engineering. 2014, 40(6): 171-174,179. https://doi.org/10.3969/j.issn.1000-3428.2014.06.037
    In social networks, communities and circles are both groups of vertices with relatively dense internal connections, but circles are small-scale. Intuitively, circles are important local information and community detection can benefit from them; unfortunately, most existing label propagation methods for community detection do not take circle-based information into account. Aiming at this problem, this paper proposes a Local Strengthened Multi-label Propagation(LSMLP) algorithm for community detection. It first defines circles and then proposes an iterative multi-label propagation strategy that uses circle-based information; based on modularity optimization, a unique label is then selected from the multi-labels of each vertex. The performance of LSMLP is analyzed and compared with related methods on several real networks, showing that the method uncovers communities efficiently and effectively.
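    A minimal sketch of the baseline label propagation that LSMLP extends; circle-based strengthening and modularity-based label selection are omitted:

```python
from collections import Counter
import random

def label_propagation(adj, iters=20, seed=0):
    """Plain label propagation: each vertex takes its neighbors' majority label.

    adj: dict mapping vertex -> list of neighbor vertices.
    """
    rng = random.Random(seed)
    labels = {v: v for v in adj}          # every vertex starts in its own community
    nodes = list(adj)
    for _ in range(iters):
        rng.shuffle(nodes)
        for v in nodes:
            if not adj[v]:
                continue
            counts = Counter(labels[u] for u in adj[v])
            best = max(counts.values())
            labels[v] = rng.choice([l for l, c in counts.items() if c == best])
    return labels

graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(label_propagation(graph))   # the two triangles usually settle into two labels
```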
  • XIA Pei-pei, ZHANG Li
    Computer Engineering. 2014, 40(6): 175-179. https://doi.org/10.3969/j.issn.1000-3428.2014.06.038
    Traditional similarity learning algorithms consider all training samples when constructing paired samples, which leads to a paired-sample space that grows quadratically with the number of training samples, and Support Vector Machine(SVM) is known to be time-consuming on such large-scale classification problems. Aiming at this problem, this paper proposes an improved SVM-based similarity learning algorithm and applies it to face recognition. A new paired-sample construction method, the pairwise sample method, is introduced, and the K Nearest Neighbor(KNN) algorithm is adopted to reduce the number of dissimilar paired samples and thus speed up training. In addition, random projection is introduced to reduce the dimensionality of the face data. Experimental results show that the improved algorithm classifies better than algorithms based on difference-space paired samples and on absolute-difference paired samples.
  • LI Zhi-wei, GE Hong-wei, YANG Jin-long
    Computer Engineering. 2014, 40(6): 180-184,189. https://doi.org/10.3969/j.issn.1000-3428.2014.06.039
    Aiming at the problems that the affinity matrix of traditional spectral clustering is constructed inaccurately and the clustering results are unstable, a spectral clustering algorithm based on neighbor relation propagation and mode merging is proposed. Following the neighbor relation propagation principle, the similarity between samples within the same subset is updated first, and a local-maximum similarity updating method then updates the similarity between samples in different subsets. Mode merging merges the subsets, whose number exceeds the true number of clusters, into coarse clusters, and the similarity between samples in different coarse clusters is updated again to obtain the final affinity matrix, which is used for spectral clustering. Experimental results show that after the second update, the similarity between samples in the same cluster is relatively enlarged while that between samples in different clusters is relatively reduced; compared with the neighbor propagation spectral clustering algorithm, the proposed algorithm obtains more accurate and stable clustering results.
  • XU Tao, YU Hong-zhi, JIA Yang-ji
    Computer Engineering. 2014, 40(6): 185-189. https://doi.org/10.3969/j.issn.1000-3428.2014.06.040
    Tibetan document representation transfers unstructured Tibetan text into a form that a computer can process, and it is the prerequisite for Tibetan text categorization and clustering. Traditional Tibetan document representation methods take little account of the relatedness between feature items, so some semantic information is lost and representation accuracy is reduced. Building on the Vector Space Model(VSM), a classical model in information retrieval, this paper proposes a new document representation method. Terms with high TF-IDF values are first extracted as comparison terms; Tibetan sentences are then segmented from the document as context units, and the Chi-square statistic is used to compute the degree of association between each term and the comparison terms. Experimental results show that this method represents Tibetan documents more accurately than the traditional VSM.
  • ZHOU Ru-qi, FENG Jia-li, ZHANG Qian
    Computer Engineering. 2014, 40(6): 190-194. https://doi.org/10.3969/j.issn.1000-3428.2014.06.041
    Qualitative mapping can easily express fuzzy, uncertain knowledge, but it does not represent well the dynamic characteristics of cognitive thinking. Fuzzy Petri nets are more consistent with human thinking, but their parameters are hard to obtain and their self-learning ability is limited. For these reasons, the formal concept of the Fuzzy Attribute Petri Net(FAPN) and its modeling method are defined. A parameter learning method is constructed within the FAPN structure, and four types of fuzzy qualitative judgment rules and the operation formulas of transition nodes are defined based on qualitative mapping. The reasoning algorithm and learning method of FAPN, which can simulate the dynamic process of a network system, are proposed. Analysis shows that the proposed method gives FAPN better learning ability and that it is also useful in diagnosis systems characterized by qualitative judgment.
  • TAO Shu-yi, WANG Ming-wen, WAN Jian-yi, LUO Yuan-sheng, ZUO Jia-li
    Computer Engineering. 2014, 40(6): 195-200. https://doi.org/10.3969/j.issn.1000-3428.2014.06.042
    Traditional text clustering methods are suitable only for static samples, and their time complexity is high. Aiming at these problems, this paper proposes a new Incremental Text Clustering Algorithm based on Congruence(ITCAC) between text and cluster, which avoids a large amount of repeated computation and so improves clustering performance. It uses a text representation model based on the semantic similarity of lexical items, fully accounts for the semantic information between terms, and computes the congruence between new documents and existing clusters. After processing part of the documents, the algorithm reassigns the documents most likely to be misclassified, further improving clustering quality. Experimental results on the 20 Newsgroups dataset show that, compared with the k-means and SHC algorithms, the new algorithm both takes less clustering time and clusters better.
  • YU Feng, YU Zheng-tao, YANG Jian-feng, GUO Jian-yi, YAN Xin
    Computer Engineering. 2014, 40(6): 201-205. https://doi.org/10.3969/j.issn.1000-3428.2014.06.043
    Focusing on the problem of automatically recommending experts for project evaluation, this paper proposes an expert recommendation method based on topic information. First, it analyzes the attributive characteristics of project documents and expert documents, and uses the Latent Dirichlet Allocation(LDA) topic model to obtain topic words from each document according to its characteristics. Second, it constructs the topic feature space of the documents through topic word frequency statistics, and uses the TF-IDF feature extraction algorithm, weighted by the importance of the document fields, to obtain the topic feature vectors of the project and expert documents. Finally, it uses an improved similarity calculation algorithm to compute the correlation between the project's topic feature vector and each expert's vector, and the experts most correlated with the project are returned as the recommendation result. Experimental results show that the proposed method recommends better than methods based on TF-IDF with cosine similarity and on cosine similarity alone: precision, recall and F-score increase by 4.87%, 5.04% and 4.97% on average.
  • WEI Su-yun, YE Ning, YANG Xu-bing
    Computer Engineering. 2014, 40(6): 206-210. https://doi.org/10.3969/j.issn.1000-3428.2014.06.044
    When searching for the nearest neighbor set, the traditional item-based collaborative filtering algorithm considers only the similarity between items, ignoring the influence of item category similarity and of time on recommendation. Aiming at these problems, an improved item-based collaborative filtering algorithm combining item category similarity and dynamic time weighting is proposed. Item category similarity is introduced to improve the accuracy of inter-item similarity, and two weighting functions are constructed to incorporate temporal information into the prediction, adapting to changes in both user and item characteristics over time. Experimental results show that the proposed algorithm efficiently traces the drift of user interests and item popularity and improves prediction accuracy.
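    A hedged sketch of time-weighted prediction; the exponential half-life decay is an assumed form of the dynamic time weight, and sim() stands in for the combined item/category similarity:

```python
import time

def time_weight(rating_time, now, half_life_days=90.0):
    """Exponential decay: a rating half_life_days old counts half (assumed form)."""
    age_days = (now - rating_time) / 86400.0
    return 0.5 ** (age_days / half_life_days)

def predict(target_item, user_ratings, sim, now=None):
    """Similarity-weighted average of the user's ratings, decayed by rating age.

    user_ratings: (item, rating, unix_time) triples; sim(i, j) is the combined
    item/category similarity supplied by the caller.
    """
    now = now or time.time()
    num = den = 0.0
    for item, rating, ts in user_ratings:
        w = sim(target_item, item) * time_weight(ts, now)
        num += w * rating
        den += abs(w)
    return num / den if den else 0.0

ratings = [("a", 5.0, time.time() - 10 * 86400), ("b", 2.0, time.time() - 400 * 86400)]
print(predict("c", ratings, sim=lambda i, j: 0.8))   # close to 5: recent rating dominates
```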
  • WANG Zi-qiang, WU Ji-gang
    Computer Engineering. 2014, 40(6): 211-214. https://doi.org/10.3969/j.issn.1000-3428.2014.06.045
    The reward function in the traditional Q-learning algorithm is usually simple, which makes learning inefficient. To solve this problem, a Reward Detailed Classification Q(RDC-Q) learning algorithm is proposed. It synthesizes all the robot's sensor values and, according to the distances between the robot and obstacles, divides the robot's states into 20 punishment states and 15 reward states. The reward a robot receives at each time step is classified by its security level, which steers the robot toward states of higher security, so the robot learns faster and better. Simulations of the new algorithm in an environment with dense obstacles show that its convergence speed is clearly better than that of traditional-reward Q methods.
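    A minimal sketch of the tabular Q-learning update with a graded, security-level reward; the thresholds and reward values are illustrative, not the paper's 35-state classification:

```python
def reward(dist_to_obstacle, reached_goal):
    """Graded reward by security level (illustrative thresholds, not the paper's)."""
    if reached_goal:
        return 10.0
    if dist_to_obstacle < 0.2:       # collision zone: strong punishment
        return -5.0
    if dist_to_obstacle < 0.5:       # risky zone: mild punishment
        return -1.0
    return 0.5                       # safe zone: small positive reward

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """Tabular Q-learning step: Q(s,a) += alpha*(r + gamma*max Q(s',.) - Q(s,a))."""
    old = Q.get((s, a), 0.0)
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

Q = {}
q_update(Q, s="near_wall", a="turn", r=reward(0.4, False),
         s_next="open", actions=["turn", "go"])
print(Q)   # {('near_wall', 'turn'): -0.1}
```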
  • ZHANG Yan, WU Bao-guo, LV Dan-ju, LIN Ying
    Computer Engineering. 2014, 40(6): 215-218,229. https://doi.org/10.3969/j.issn.1000-3428.2014.06.046
    Both semi-supervised learning and active learning attempt to exploit unlabeled data to improve the recognition rate of supervised learning algorithms while minimizing the cost of data labeling. This paper therefore proposes a sample selection strategy for active learning, Entropy Priority Sampling(EPS), and combines it with the Tri-training semi-supervised algorithm. Experimental results on UCI and image datasets, under different proportions of labeled training samples, show that the algorithm obtains better results with fewer labeled examples, and that combining active learning with semi-supervised learning is an effective way to improve performance and generalization.
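    A minimal sketch of entropy-based sample selection, assuming EPS ranks unlabeled samples by prediction entropy:

```python
import numpy as np

def entropy_priority(proba, k=5):
    """Pick the k unlabeled samples whose predicted class distribution
    has the highest entropy, i.e. the ones the model is least sure about.

    proba: (n_samples, n_classes) predicted probabilities for the unlabeled pool.
    """
    p = np.clip(proba, 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum(axis=1)
    return np.argsort(entropy)[-k:][::-1]    # indices, most uncertain first

proba = np.array([[0.98, 0.02], [0.55, 0.45], [0.70, 0.30]])
print(entropy_priority(proba, k=2))          # [1 2]: the two most ambiguous
```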
  • ZHANG Kai-jun, LIANG Xun
    Computer Engineering. 2014, 40(6): 219-225. https://doi.org/10.3969/j.issn.1000-3428.2014.06.047
    Support Vector Machine(SVM) is an important method in statistical machine learning and is widely used in pattern recognition, regression analysis and elsewhere. However, the ordinary SVM does not consider the distribution of the whole sample, which hurts its generalization ability. Aiming at this problem, this paper builds an SVM with the Mahalanobis distance, which accounts for the distribution of the whole sample, and extends it to a multiple kernel model. Using a mathematical transformation, the paper converts the Euclidean-distance kernel matrix into a Mahalanobis-distance kernel matrix, making the algorithm easy to implement. Experimental results show that the multiple kernel model based on the Mahalanobis distance achieves higher classification accuracy while retaining the character of the Euclidean-distance multiple kernel model.
  • LIU Wan-jun, MENG Yu, QU Hai-cheng, SHI Cui-ping
    Computer Engineering. 2014, 40(6): 225-229. https://doi.org/10.3969/j.issn.1000-3428.2014.06.048
    To meet the needs of different users for remote sensing image quality in heterogeneous network environments, this paper constructs a progressive transmission model for remote sensing images with off-line compression, real-time transmission and real-time decompression. A pipeline-based multi-thread acceleration method is proposed that resolves the asynchrony among compression, decompression and transmission and improves the efficiency of progressive transmission. The JPEG2000 multi-resolution image compression algorithm is employed to increase the compression ratio, reduce transport traffic and thereby lighten the network transmission burden. Experimental results show that the off-line compression progressive transmission model nearly doubles overall processing speed without reducing image quality, and that its visual effect is better than that of multi-resolution progressive transmission.
  • JIA Di, DONG Na, MENG Xiang-fu, MENG Lu
    Computer Engineering. 2014, 40(6): 230-233. https://doi.org/10.3969/j.issn.1000-3428.2014.06.049
    Images acquired in practice often contain noise or have poor contrast. To realize simultaneous denoising and contrast enhancement, this paper presents a vector-diffusion-controlled method. The structure of the Total Variation(TV) model is analyzed and its problem pointed out; edge intensity is better controlled by introducing vector diffusion into the transformed model. A differential form of Contrast Limited Adaptive Histogram Equalization(CLAHE) is proposed and combined with the improved TV model to synchronize denoising and contrast enhancement. Two groups of experiments comparing image quality and gray-level distribution verify the effectiveness of the method. The results show that the method not only resolves the staircase effect that the TV model exhibits during denoising, but also improves image contrast, and thus improves image quality.
  • WANG Han-yu, GUO Hao, AN Ju-bai, WANG Ning
    Computer Engineering. 2014, 40(6): 234-237. https://doi.org/10.3969/j.issn.1000-3428.2014.06.050
    MODIS is multispectral remote sensing data that offers high transit frequency and spectral resolution as well as low cost and wide coverage. Owing to the curvature of the earth, most MODIS images exhibit an overlapping artifact known as the Bowtie effect, mainly at the fringes of the image, which restricts further analysis and application of MODIS data. For this geometric distortion problem, this paper presents a Bowtie removal algorithm that does not use ephemeris data: it uses the correlation coefficient to determine the number of duplicate rows in each scan, and applies a more effective resampling method to MODIS L1B data of different resolutions. The effectiveness of the algorithm is demonstrated by comparing its Bowtie removal results with those of other classic algorithms. The results show that the proposed algorithm removes the MODIS Bowtie effect effectively and processes faster, so it has high engineering application value.
  • GU Wen-jiao, ZHANG Hua-xiang
    Computer Engineering. 2014, 40(6): 238-240,246. https://doi.org/10.3969/j.issn.1000-3428.2014.06.051
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Nowadays Content-based Image Retrieval(CBIR) is still the most common way of image retrieval. In order to improve retrieval precision, this paper proposes a new approach to image retrieval which integrates textual and visual information. In the retrieval process, a technique is presented that automatically transforms textual queries into visual representations. The relationships between texts and images are mined and employed to construct a cross-media dictionary, which automatically transforms textual queries into visual ones. The retrieval results of the textual and visual queries are combined as the final results. Experimental results show that the proposed approach can effectively improve image retrieval precision.
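    A toy sketch of such a cross-media dictionary, assuming each keyword is mapped to the mean visual feature of the images annotated with it (all identifiers and data shapes are hypothetical):

```python
# Toy cross-media dictionary: keyword -> mean visual feature of the images
# annotated with it. All names and data shapes are hypothetical.
import numpy as np

def build_dictionary(image_features, image_tags):
    """image_features: {image_id: np.ndarray}; image_tags: {image_id: [tag, ...]}."""
    by_tag = {}
    for img, tags in image_tags.items():
        for tag in tags:
            by_tag.setdefault(tag, []).append(image_features[img])
    return {tag: np.mean(vecs, axis=0) for tag, vecs in by_tag.items()}

def text_to_visual(query_terms, dictionary):
    """Transform a textual query into a visual query vector (or None)."""
    vecs = [dictionary[t] for t in query_terms if t in dictionary]
    return np.mean(vecs, axis=0) if vecs else None
```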
  • YANG Shao-hua, PAN Chen, WEI Li-li
    Computer Engineering. 2014, 40(6): 241-246. https://doi.org/10.3969/j.issn.1000-3428.2014.06.052
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    In order to improve the performance of blood cell image recognition, a method for recognizing color blood cell images based on kernel functions is proposed. The blood cell image is normalized by color histograms and local histograms. Kernel Principal Component Analysis(KPCA) is used to extract nonlinear features and reduce the high dimensionality of the data representation. The features are weighted by a Support Vector Machine(SVM), and the multiclass classifier is composed of SVM and Nearest Neighbor(NN) classifiers, so the overall system is in effect a support vector network for the classification task. In order to train this network automatically, relevance feedback is utilized to adjust its parameters. The validity of the method is proved by experimental results on a database of color blood cell images.
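    A minimal sketch of the KPCA-plus-SVM stage using scikit-learn, with synthetic vectors standing in for the normalized color and local-histogram descriptors:

```python
# KPCA feature extraction feeding an SVM, on synthetic stand-ins for the
# normalized color/local-histogram descriptors.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))        # placeholder histogram features
y = rng.integers(0, 3, size=200)      # three hypothetical cell classes

model = make_pipeline(KernelPCA(n_components=16, kernel="rbf"),
                      SVC(kernel="rbf"))
model.fit(X, y)
print(model.score(X, y))              # training accuracy of the sketch
```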
  • LI Yu-jian, YIN Chuang-ye, YANG Yong
    Computer Engineering. 2014, 40(6): 247-251. https://doi.org/10.3969/j.issn.1000-3428.2014.06.053
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Weight setting has a great impact on the performance of graph matching models, and weights obtained by direct calculation often produce unsatisfactory correspondences between real images. Based on the idea of learning graph matching for quadratic assignment problems, this paper considers weight learning methods for first-order and second-order maximum weight matching models. In the first-order maximum weight matching model, image feature points are regarded as vertices of a bipartite graph, whereas in the second-order model, edges connecting two feature points are viewed as vertices. Both models can be solved by the Kuhn-Munkres algorithm, and the first-order model is essentially equivalent to the linear assignment problem. Experimental results on the CMU House database show that the second-order maximum weight matching model consistently outperforms the first-order model, and both perform better with learned weights than with directly calculated ones.
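    For the first-order model, maximum weight bipartite matching can be solved as a linear assignment problem; a sketch with SciPy's Hungarian (Kuhn-Munkres) solver, using a random matrix in place of learned affinities:

```python
# Maximum weight bipartite matching via the Hungarian algorithm; W is a
# random stand-in for the (possibly learned) affinity matrix.
import numpy as np
from scipy.optimize import linear_sum_assignment

W = np.random.rand(10, 10)                 # affinities between feature points
rows, cols = linear_sum_assignment(W, maximize=True)
print(W[rows, cols].sum())                 # total weight of the matching
```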
  • ZHANG Shao-bo, QUAN Shu-hai, SHI Ying, YANG Yang, LI Yun-lu, CHENG Shu
    Computer Engineering. 2014, 40(6): 252-255. https://doi.org/10.3969/j.issn.1000-3428.2014.06.054
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    In order to overcome the disadvantages of Content-based Image Retrieval(CBIR), the study of RGB color moments and Munsell color moments shows that RGB color moments are only suitable for simple pictures, while Munsell color moments cannot characterize the specific information of pictures. Fusing the color moments of multiple color spaces at the decision level with information fusion technology gives a better solution. By contrast with traditional fusion algorithms, an improved fusion algorithm named NewcombMNZ is proposed. Experimental results show that the improved algorithm improves retrieval accuracy and ranking value, and shows good robustness to noise. It reduces the effect of poor features on fusion, and can be used when only a few features are available.
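    For reference, the classic CombMNZ rule that NewcombMNZ builds on can be sketched as follows (scores are assumed normalized to [0,1]; identifiers are illustrative):

```python
# Classic CombMNZ: fused score = (sum of scores) * (number of feature
# spaces that returned the image). Identifiers are illustrative.
def comb_mnz(score_lists):
    """score_lists: one {image_id: normalized_score} dict per feature space."""
    sums, hits = {}, {}
    for scores in score_lists:
        for img, s in scores.items():
            sums[img] = sums.get(img, 0.0) + s
            hits[img] = hits.get(img, 0) + 1
    return {img: sums[img] * hits[img] for img in sums}
```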
  • WANG Hai, TONG Heng-jian, ZUO Bo-xin, TANG Wen-rui
    Computer Engineering. 2014, 40(6): 256-261. https://doi.org/10.3969/j.issn.1000-3428.2014.06.055
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    In remote sensing image processing and analysis software, image segmentation/classification and vectorization are two separate processes: (1)segmentation/classification of the whole image; (2)vectorization of the whole segmented or classified image. The vectorized file can only be used for display and not for subsequent operations, because the feature information of the image objects(regions, parcels) is not written into the file. Furthermore, when the image is complex, there is an inconsistency between the number of polygons in the vector file and the number of image objects in the segmented or classified image. In order to solve these two problems, this paper integrates a multiscale segmentation algorithm with a vectorization algorithm. Firstly, the remote sensing image is divided into a collection of image objects by the multiscale segmentation algorithm. Secondly, each image object is vectorized, and at the same time the feature statistics of each image object are written into the polygon's attributes. With the proposed method, the number of polygons in the vector file equals the number of image objects in the segmented or classified image. Moreover, subsequent multiscale segmentation, region merging, spatial relationship operations, etc. can be executed on the vector polygons, because all feature statistics of each polygon are saved in the vector file.
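    The per-object attribute extraction step can be sketched with scikit-image, where SLIC superpixels stand in for the paper's multiscale segmentation and the per-region statistics are what would be written into each polygon's attribute table:

```python
# Segment, then collect per-object statistics that would be written into
# each vector polygon's attribute table (SLIC stands in for the paper's
# multiscale segmentation).
from skimage import data
from skimage.measure import regionprops
from skimage.segmentation import slic

image = data.astronaut()
labels = slic(image, n_segments=200, start_label=1)

attributes = {
    region.label: {"area": region.area, "mean_intensity": region.mean_intensity}
    for region in regionprops(labels, intensity_image=image[..., 0])
}
```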
  • WU Hai-xia, FENG Wei, RAN Wei
    Computer Engineering. 2014, 40(6): 262-266,271. https://doi.org/10.3969/j.issn.1000-3428.2014.06.056
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    In view of the global asymptotical stability of genetic regulatory networks with interval time-varying delays, this paper employs a Lyapunov-Krasovskii functional with Linear Matrix Inequality(LMI) techniques and a new integral inequality approach, and derives some improved stability criteria. The result takes into account the relationship between the time-varying delays and their lower and upper bounds. Owing to the new integral inequality approach, the proposed method involves the fewest decision variables while maintaining the effectiveness of the stability conditions. The result can provide a reference for the design of gene chips. Two numerical examples are given to demonstrate the effectiveness and the benefits of the proposed method.
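    For orientation, a generic Lyapunov-Krasovskii functional for an interval time-varying delay has the following form; this is an illustrative construction, not the paper's exact functional:

```latex
% Generic Lyapunov--Krasovskii functional for an interval time-varying delay
% \tau_1 \le \tau(t) \le \tau_2 (illustrative form, not the paper's exact one):
V(x_t) = x^{T}(t)Px(t)
       + \int_{t-\tau(t)}^{t} x^{T}(s)Qx(s)\,ds
       + \int_{-\tau_2}^{-\tau_1}\int_{t+\theta}^{t} \dot{x}^{T}(s)R\dot{x}(s)\,ds\,d\theta,
\quad P,\,Q,\,R \succ 0 .
```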
  • LI Shou-ju, YU Shen, SUN Zhen-xiang, CAO Li-juan
    Computer Engineering. 2014, 40(6): 267-271. https://doi.org/10.3969/j.issn.1000-3428.2014.06.057
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Based on observed data from three-dimensional compressive tests of rockfill material, a parameter inversion method for a nonlinear constitutive model is proposed that uses a neural network to estimate the parameters of rockfill. The relationships between axial loads and strains are expressed analytically by linearly approximating the axial and radial strains in the three-dimensional compressive test. In order to validate the effectiveness of the proposed method, the three-dimensional compressive test of rockfill material is performed in the laboratory. Experimental results show that, compared with a model parameter estimation method based on gradient optimization search, the proposed method predicts the behavior of the tested rockfill material more accurately, with the maximum relative error decreased by 17.8%.
  • WANG Nan, LU Yu, GUO Chun-sheng, WANG Qiu-zhu
    Computer Engineering. 2014, 40(6): 272-274,280. https://doi.org/10.3969/j.issn.1000-3428.2014.06.058
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    A uniform sampling rate is commonly used in video reconstruction, which makes it difficult to improve reconstruction quality. A novel method of variable-rate sparse sampling is proposed in this paper. The edges of the frame difference are detected with an adaptive threshold, and pixel blocks are classified as active or passive according to these edges. Active blocks are sampled at a high rate while passive blocks are sampled at a low rate. Combining smooth filtering with iterative Projection Onto Convex Sets(POCS) steps, video reconstruction is accomplished by block optimization. Different from the commonly used uniform sampling, the proposed method properly exploits the motion texture of the video: pixel blocks with salient motion are sampled at a high rate so that reconstruction quality is enhanced. Simulation results show that the proposed variable-rate sampling method can reduce block artifacts and obtain a higher Peak Signal-to-Noise Ratio(PSNR) than the uniform sampling method.
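    The block classification step can be sketched as follows, assuming grayscale frames and a simple mean-plus-standard-deviation adaptive threshold (the paper's edge-based rule may differ):

```python
# Classify 16x16 blocks of a grayscale frame difference as active/passive
# with a mean-plus-std adaptive threshold (illustrative, not the paper's
# edge-based rule).
import numpy as np

def classify_blocks(prev_frame, cur_frame, block=16):
    diff = np.abs(cur_frame.astype(float) - prev_frame.astype(float))
    h, w = diff.shape
    energies = np.array([
        diff[i:i + block, j:j + block].mean()
        for i in range(0, h - block + 1, block)
        for j in range(0, w - block + 1, block)
    ])
    threshold = energies.mean() + energies.std()
    return energies > threshold          # True = active (high-rate) block
```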
  • BAO Jie, WANG Ling-li
    Computer Engineering. 2014, 40(6): 275-280. https://doi.org/10.3969/j.issn.1000-3428.2014.06.059
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    In the automated Field Programmable Gate Array(FPGA) synthesis flow, Boolean Matching(BM) is one of the core problems. The Boolean matching method based on the Bloom filter consumes a large amount of storage space and sacrifices part of its coverage of realizable functions. Aiming at this problem, this paper proposes a Boolean matching method that introduces a rule expression form for Boolean functions, classifies the Boolean functions, and performs dynamic learning in the process of Boolean matching. Experimental results show that function classification saves up to 96% of the storage space for Boolean matching, and the dynamic learning strategy reduces the LUT count of circuits by more than 13% when applied in a logic synthesis algorithm.
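    For background, here is a minimal Bloom filter of the kind the baseline method relies on; the size and hash-count parameters are illustrative, not the paper's:

```python
# Minimal Bloom filter with double hashing; m and k are illustrative.
import hashlib

class BloomFilter:
    def __init__(self, m=1 << 16, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _hashes(self, item):
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big")
        return ((h1 + i * h2) % self.m for i in range(self.k))

    def add(self, item):
        for h in self._hashes(item):
            self.bits[h // 8] |= 1 << (h % 8)

    def __contains__(self, item):
        return all(self.bits[h // 8] & (1 << (h % 8)) for h in self._hashes(item))
```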
  • WANG Zheng, YOU Ming-yu, LIU Jia-ming, LI Guo-zheng
    Computer Engineering. 2014, 40(6): 281-284,290. https://doi.org/10.3969/j.issn.1000-3428.2014.06.060
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Cough contains a lot of pathological information that is helpful for clinical diagnosis. Automatic cough detection improves the efficiency and reliability of this task and reduces the manual workload. In the collected corpus, however, the amount of cough signal is much smaller than that of other signals, so automatic cough detection in audio recordings is a typical class imbalance problem. Aiming at this problem, this paper proposes a novel imbalance classification method named APLSCX for the detection of cough signals. It exploits the ability of the asymmetric Partial Least Squares(PLS) classifier to handle class-imbalanced data in order to extract features from the normalized feature vectors, and it adjusts the classification plane based on the variance of the low-dimensional data. Experimental results show that APLSCX increases the detection rate of cough while keeping the false alarm rate at a low level. Compared with the Leicester Cough Monitor(LCM) and Support Vector Machine(SVM) methods, it has a higher detection rate and a lower false alarm rate, and is more suitable for detecting cough signals in audio recordings.
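    A rough sketch of the PLS-then-adjust idea on synthetic imbalanced data, shifting the decision threshold toward the minority class; this is an assumption-laden stand-in, not the APLSCX algorithm itself:

```python
# PLS projection plus a recall-biased threshold on synthetic imbalanced
# data; a stand-in for APLSCX, not the algorithm itself.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))
y = (rng.random(500) < 0.1).astype(float)      # ~10% minority "cough" frames
X[y == 1] += 0.5                               # give the minority class some signal

pls = PLSRegression(n_components=2).fit(X, y)
scores = pls.predict(X).ravel()
threshold = scores[y == 1].mean() - scores[y == 1].std()  # shifted toward recall
pred = scores > threshold
```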
  • XUE Jiao, SUN Peng, DENG Feng, WANG Jing-lin
    Computer Engineering. 2014, 40(6): 285-290. https://doi.org/10.3969/j.issn.1000-3428.2014.06.061
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    The restrictions and constraints that traditional remote controls impose on users reduce the quality of the user experience. Aiming at this defect, a gesture remote control system based on a touch screen is proposed. Based on an analysis of touch gesture meta-actions, touch gestures are classified and modeled mathematically, and recognition algorithms for touch gestures are designed for the remote control system. Differences in cognition and behavior among users are fully considered in the algorithms. A gesture remote control system for smart TVs is implemented to collect the operating habits of real users during use, and thereby further improve the recognition accuracy of touch gestures and their corresponding remote operations. Experimental results show that the algorithm can distinguish touch gestures that are easily confused, raising the average recognition accuracy to 99%.
  • LU Shi-chang, YUAN Duo-ning, YANG Xiao-tao
    Computer Engineering. 2014, 40(6): 291-294. https://doi.org/10.3969/j.issn.1000-3428.2014.06.062
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    For supply chain performance evaluation, an algorithm is proposed that combines an Improved Harmony Search Algorithm(IHSA) with the Least Squares Support Vector Machine(LSSVM). Building on the principle of the Harmony Search Algorithm(HSA), the improved algorithm dynamically adjusts the pitch adjusting rate and the pitch adjustment step. Its global search ability is used to select the LSSVM penalty factor and the Gaussian kernel radius. Combined with a supply chain performance evaluation example, a supply chain assessment model is built. Simulation results, quantitatively compared with the existing BP neural network algorithm and plain LSSVM evaluation, show that IHS_LSSVM has a smaller prediction error and higher prediction accuracy.
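    The dynamic parameter adjustment in improved harmony search is commonly realized with schedules like the following; the exact schedules used in the paper may differ:

```python
# Common dynamic schedules for IHSA: the pitch adjusting rate rises
# linearly while the bandwidth decays exponentially over iterations.
import math

def dynamic_par(it, max_it, par_min=0.1, par_max=0.9):
    return par_min + (par_max - par_min) * it / max_it

def dynamic_bw(it, max_it, bw_min=0.01, bw_max=1.0):
    return bw_max * math.exp(math.log(bw_min / bw_max) * it / max_it)
```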
  • DOU Huan, JIA Ke-bin
    Computer Engineering. 2014, 40(6): 295-299. https://doi.org/10.3969/j.issn.1000-3428.2014.06.063
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    The original Multi-view Video Coding(MVC) mode selection algorithm uses a full mode search to find the best prediction mode; it achieves good performance but at high complexity. This work analyzes the relationship between Macro Block(MB) modes and depth values in Multi-view Video plus Depth(MVD), and proposes a depth-based mode selection algorithm for MVC. The depth values are used to divide the 3D space into remote, close and midrange areas, and each area is handled separately. For the midrange area, which is the most complicated, the most frequently used mode of the corresponding MB and its surrounding MBs in the reference view is used as the Most Likely used Mode(MLM) to single out MBs that are likely to use large partition modes. Dynamic depth flatness combined with motion information is used to determine the final MB mode. Experimental results show that the proposed algorithm saves 71.70% of search points on average compared with the full search algorithm, which remarkably reduces the complexity of the mode decision process while maintaining nearly the same rate-distortion performance.
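    The depth-based partition can be sketched as below; the two thresholds are illustrative, assuming 8-bit depth maps where larger values mean closer objects:

```python
# Partition by mean macroblock depth; thresholds are illustrative for
# 8-bit depth maps where larger values mean closer objects.
import numpy as np

def depth_area(depth_mb, near=170, far=60):
    mean_depth = np.mean(depth_mb)
    if mean_depth >= near:
        return "close"
    if mean_depth <= far:
        return "remote"
    return "midrange"
```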
  • TANG Xu-long, AN Hong, FAN Dong-rui
    Computer Engineering. 2014, 40(6): 300-305. https://doi.org/10.3969/j.issn.1000-3428.2014.06.064
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    The wide use of network video conferencing and high-definition video-on-demand applications places higher requirements on video codec quality and coding speed. In order to help hardware designers build customized processors of this kind and evaluate their design rationality, this paper uses a top-down analysis to address these questions. It selects typical codecs according to popularity, coding efficiency, compression quality and source accessibility. Hotspots of transformation, quantization and loop filtering are extracted from the coding process as kernels, and these kernels are used to investigate the nature of the workloads. To better understand the intrinsic characteristics of video coding applications, it analyzes both computational and memory characteristics, and further provides insights into architectural designs that can improve the performance of this kind of application. Results show that this benchmark suite uses 10% of the code to represent the main characteristics of video coding applications, and it is instructive and meaningful for processor design.
  • SUN Hai-long, WANG Ni-hong
    Computer Engineering. 2014, 40(6): 306-311. https://doi.org/10.3969/j.issn.1000-3428.2014.06.065
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Most existing methods for Continuous K Nearest Neighbors(CKNN) query of moving objects in road networks model the road network as road segments and nodes, and convert it into a directed or undirected graph in memory. There are two problems with this model. First, the number of road segments is so large that the index structure has too many branches and moving objects are updated frequently. Second, traffic regulations such as turn restrictions at crossroads and U-turns cannot be handled in the graph model. This paper proposes a CKNN query algorithm for moving objects in road networks based on the RRN-Tree, covering the design of the index structure and the query algorithm. It models the road network as routes, and implements CKNN query under complicated road conditions by expanding road edges. Experimental results show that the RRN-Tree-based method outperforms the classical IMA/GMA algorithm under various network densities and object distribution densities, improving performance by 1.5 to 2.13 times.
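    For contrast, a network-expansion kNN query on a toy road graph can be sketched with NetworkX; this illustrates the query semantics only, not the RRN-Tree index or its continuous maintenance:

```python
# Dijkstra-based kNN over a toy road graph; illustrates query semantics
# only, not the RRN-Tree index or continuous maintenance.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 2.0), ("b", "c", 1.5),
                           ("a", "d", 3.0), ("d", "c", 1.0)])
objects = {"c": "obj1", "d": "obj2"}      # moving objects snapped to nodes

dist = nx.single_source_dijkstra_path_length(G, "a")
knn = sorted((dist[n], objects[n]) for n in objects)[:2]
```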
  • SUN Yong-li, LI Dong, ZHANG Yue
    Computer Engineering. 2014, 40(6): 312-316. https://doi.org/10.3969/j.issn.1000-3428.2014.06.066
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Aiming at the identification, discovery and monitoring of public opinion on hot topics in network forums, a hot topic discovery method based on heat entropy is presented. Firstly, it collects forum data with a web crawler and, after data pre-processing and analysis of hot topic attributes, reasonably defines the heat entropy of a topic and the weight of each attribute. Secondly, it detects and tracks hot topics in online forums according to the analysis, statistics and evaluation of this information. Finally, it calculates the discovery accuracy for different topic types according to the topic division and the definition of each type. Experimental results show that, compared with traditional topic semantic analysis methods, the strategy is reasonable and effective, so it can serve as a basis for public opinion monitoring in Internet forums.
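    One plausible reading of the heat entropy of a topic is the Shannon entropy over its normalized attribute distribution; the attribute set here is hypothetical:

```python
# One plausible reading of "heat entropy": Shannon entropy over a topic's
# normalized attribute counts. Attribute names are hypothetical.
import math

def heat_entropy(attribute_counts):
    """attribute_counts: e.g. {'replies': 120, 'views': 900, 'reposts': 30}."""
    total = sum(attribute_counts.values())
    probs = (c / total for c in attribute_counts.values() if c > 0)
    return -sum(p * math.log(p) for p in probs)
```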
  • LIU Chang, GUO Yang
    Computer Engineering. 2014, 40(6): 317-320. https://doi.org/10.3969/j.issn.1000-3428.2014.06.067
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Traditional directed testing is inefficient, easily misses corner cases, and yields test platforms with poor scalability and portability. Aiming at these problems, this paper puts forward a fast method to build a coverage-driven random test platform by using SystemVerilog's object-oriented features, constrained-random solving mechanism and coverage mechanism. It models the instruction set with object-oriented methods, defines functional coverage points and cross-coverage points, describes random constraint rules, and uses the SystemVerilog constraint solver to generate a large number of test scripts. Verification of the instruction set of the "YHFT" DSP chip shows that, compared with directed testing, register and data path coverage is improved by 50%, operand coverage by more than 90%, and cross coverage by more than 75%. Functional coverage reaches the expected value in fewer cycles, which improves the coverage rate and shortens the verification cycle.