
15 May 2014, Volume 40 Issue 5
    

  • DI Liang, DU Yong-ping
    Computer Engineering. 2014, 40(5): 1-6,11. https://doi.org/10.3969/j.issn.1000-3428.2014.05.001
    Latent Dirichlet Allocation(LDA) can identify topic information in large-scale document sets, but it performs poorly on short texts such as microblogs. This paper proposes an LDA-based microblog user model, which partitions microblogs by user and represents each user by all of that user's posted microblogs. The standard document-topic-word layers of LDA thus become a user-topic-word model. The model is applied to user recommendation. Experiments on a real data set show that the proposed method performs better; with a proper topic number, performance improves by nearly 10%.
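    As a rough illustration of the user-document aggregation idea described above, a minimal sketch follows. It assumes gensim as the LDA implementation and uses toy, hypothetical user data; the paper's exact preprocessing and topic number may differ.

```python
# Sketch: merge each user's posts into one pseudo-document, so LDA's
# document-topic-word layers become user-topic-word.
from gensim import corpora, models

# hypothetical pre-tokenized posts grouped by user
user_posts = {
    "user_a": [["market", "stocks"], ["stocks", "fund"]],
    "user_b": [["movie", "review"], ["movie", "actor"]],
}

# merge every user's posts into a single pseudo-document
user_docs = {u: [w for post in posts for w in post]
             for u, posts in user_posts.items()}

dictionary = corpora.Dictionary(user_docs.values())
corpus = [dictionary.doc2bow(doc) for doc in user_docs.values()]

lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary)
# per-user topic distributions, usable for user recommendation
for user, bow in zip(user_docs, corpus):
    print(user, lda.get_document_topics(bow))
```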
  • WANG Sha, ZHANG Lian-ming
    Computer Engineering. 2014, 40(5): 7-11. https://doi.org/10.3969/j.issn.1000-3428.2014.05.002
    Considering the widespread use of microblog services and their impact on data mining techniques, a mining algorithm for microblog interpersonal relationship networks is proposed based on fuzzy tag matching, and the characteristics of the resulting network are analyzed. Using users' tags, the algorithm considers word morphemes, word order, and word length to calculate the match degree between words. To weaken the influence of the starting point, since different starting users may yield different results, ordinary users and celebrities are used as starting points separately. The structural characteristics of the network are also studied, and the analysis shows that the network has small-world and scale-free properties. The algorithm is evaluated by mining the friends of celebrities and of common users interested in IT: when mining the friends of 10 celebrity users, the average error rate is 14.08%, and for common users it is 10.63%.
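    A toy sketch of a tag match degree combining the three factors named above (shared morphemes, order, and length). The weights and formula are illustrative assumptions, not the paper's exact definition.

```python
from difflib import SequenceMatcher

def match_degree(tag_a: str, tag_b: str) -> float:
    # morpheme overlap: shared characters relative to the shorter tag
    shared = len(set(tag_a) & set(tag_b))
    morpheme = shared / min(len(tag_a), len(tag_b))
    # order: in-order common subsequence ratio
    order = SequenceMatcher(None, tag_a, tag_b).ratio()
    # length: penalize large length differences
    length = min(len(tag_a), len(tag_b)) / max(len(tag_a), len(tag_b))
    return 0.5 * morpheme + 0.3 * order + 0.2 * length

print(match_degree("machine learning", "deep learning"))
```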
  • LU Ti-guang, LIU Xin, LIU Ren-ren
    Computer Engineering. 2014, 40(5): 12-16,20. https://doi.org/10.3969/j.issn.1000-3428.2014.05.003
    Web crawlers and microblog APIs, the usual means of grabbing microblog data, can hardly satisfy the demands of public opinion systems. To settle this problem, this paper presents a feasible solution that imitates a browser logging in to the microblog site and captures data from its Web pages, which easily obtains all data of any microblog user. On this basis, it constructs a microblog network from the interconnections among users and discovers new users through it. To obtain high-quality data, it builds a mathematical model that computes each user's influence index from posting number, posting frequency, fan count, forwarding number, and comment number. It then builds a priority queue according to the calculated influence index, so that users with a higher index are crawled more frequently, and it calculates a time interval to compensate for the lower crawl frequency of inactive users. Experimental results show that this method is simple and fast, obtains high-quality information, and is highly versatile.
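    A minimal sketch of the influence-index priority queue described above: users with a higher index are polled more often. The linear weighting of the five indicators is an illustrative assumption.

```python
import heapq

def influence(posts, freq, fans, forwards, comments):
    # hypothetical equal weighting of the five indicators
    return 0.2*posts + 0.2*freq + 0.2*fans + 0.2*forwards + 0.2*comments

queue = []  # max-heap via negated influence index
for uid, stats in {"u1": (50, 3, 900, 120, 80), "u2": (5, 1, 40, 2, 1)}.items():
    heapq.heappush(queue, (-influence(*stats), uid))

while queue:
    neg_score, uid = heapq.heappop(queue)
    print(uid, "crawl priority", -neg_score)  # u1 is crawled first
```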
  • GAO Jun-bo, MEI Bo
    Computer Engineering. 2014, 40(5): 17-20. https://doi.org/10.3969/j.issn.1000-3428.2014.05.004
    To address the large number of advertisements on the Sina and Tencent microblog platforms, this paper proposes a microblog advertisement filtering model. Data preprocessing converts the raw data into clean data that is easy for the computer to handle. In the preprocessing stage, the stop-word list is improved according to the characteristics of microblogs, which plays a key role in improving precision. A classifier based on Support Vector Machine(SVM) is then built on the training data, and better classification results are achieved through continuous learning and feedback. Experimental results show that the advertisement filtering model achieves good results, with a filtering accuracy of more than 90%, better than the keyword-based method.
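    A minimal sketch of this kind of SVM-based ad filter, assuming scikit-learn: a custom stop-word list feeds a TF-IDF plus linear SVM pipeline. The stop words and training posts are toy placeholders, not the paper's data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

stop_words = ["rt", "http", "via"]  # hypothetical microblog-specific stop words
posts = ["buy cheap watches now", "great dinner with friends",
         "discount sale click link", "watching the game tonight"]
labels = [1, 0, 1, 0]  # 1 = advertisement, 0 = normal post

clf = make_pipeline(TfidfVectorizer(stop_words=stop_words), LinearSVC())
clf.fit(posts, labels)
print(clf.predict(["limited time discount, click now"]))  # expect [1]
```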
  • GAI Wei-lin, XIN Dan, WANG Lu, LIU Xin, HU Jian-bin
    Computer Engineering. 2014, 40(5): 21-25,30. https://doi.org/10.3969/j.issn.1000-3428.2014.05.005
    In cyberspace situation awareness research, dealing with uncertain and inaccurate multi-source heterogeneous information is an important problem to be solved in the process of situation understanding. To handle such information accurately and make situation awareness more accurate, timely, and comprehensive, this paper reviews the existing technical focuses, mainly data fusion methods and decision-making methods. Data fusion methods mainly include Bayesian networks, D-S evidence theory, rough set theory, neural networks, hidden Markov models, and Markov game theory; decision-making methods mainly include cognitive psychology, logic, and risk management. The review shows that current techniques are diverse but still leave considerable room for improvement in situation generation, application, and verification.
  • LING Xiao-ming, HAO Yu-sheng
    Computer Engineering. 2014, 40(5): 26-30. https://doi.org/10.3969/j.issn.1000-3428.2014.05.006
    To store process history data and quickly query large amounts of it, this paper proposes a disk-based history database model built on a relational database. In the storage design, static information about tags and data collection interfaces is stored in the relational database, while history data is stored in files, and a three-level cache mechanism in RAM reduces the frequency of disk access. Meanwhile, the SDT algorithm is used in data processing to reduce storage cost. To improve query efficiency, the query scheme adopts a three-level index file structure consisting of a total index file, a secondary index file, and a tag-number index file. The first version of the model is implemented. Application results show that the storage and query scheme is reasonable, returning the results for 100 tag numbers in about 500 ms.
  • WANG Ji-kui, LI Shao-bo
    Computer Engineering. 2014, 40(5): 31-35,40. https://doi.org/10.3969/j.issn.1000-3428.2014.05.007
    To avoid the effect of duplicate master data from multiple business systems on data quality, master data synchronization, and master data mining, this paper proposes the fastCdrDetection(Fast Cluster Duplicate Records Detection) algorithm, which includes a duplicate master data detection model and a credible record generation algorithm that considers data source reliability, data refresh time, and data length. A non-recursive algorithm, FiledMatch, is established for character string similarity calculation. To eliminate problems caused by abbreviations and misspellings in Chinese input, a sourceKeys algorithm preprocesses duplicate records that come from the same business system and share the same business keys, achieving high efficiency in duplicate master data detection. Experiments are carried out on 630 thousand raw-material records from a power grid and 230 thousand simulated records. Results show that the recall of fastCdrDetection is 88% versus 74% for the PQS algorithm, and its precision is 95% versus 61%, verifying the effectiveness of the algorithm.
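    A sketch of a non-recursive field similarity in the spirit of FiledMatch: an iterative (dynamic-programming) edit distance turned into a similarity score. The paper's exact scoring may differ; this is a stand-in.

```python
def field_similarity(a: str, b: str) -> float:
    if not a or not b:
        return 0.0
    prev = list(range(len(b) + 1))       # rolling-row Levenshtein, no recursion
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return 1.0 - prev[-1] / max(len(a), len(b))

print(field_similarity("transformer oil", "transfomer oil"))  # misspelling still scores high
```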
  • SONG Jia, XU Li, SUN Hong
    Computer Engineering. 2014, 40(5): 36-40. https://doi.org/10.3969/j.issn.1000-3428.2014.05.008
    Clustering is an effective and practical way to mine the huge amount of DNA microarray data for important genetic and biological information. However, most traditional clustering algorithms provide only a single clustering result and cannot identify distinct sets of genes with similar expression patterns. This paper presents a graph-theory-based algorithm for clustering DNA microarray data. In particular, a DNA microarray dataset is represented as an edge-weighted graph, to which an algorithm computing the minimum-weight and second-minimum-weight graph cuts is applied. Test results show that this approach achieves better clustering accuracy than other clustering methods such as Fuzzy-Max, Fuzzy-Alpha, and Fuzzy-Clust.
  • MENG Xiao-hua, HUANG Cong-shan, ZHU Li-sha
    Computer Engineering. 2014, 40(5): 41-44,48. https://doi.org/10.3969/j.issn.1000-3428.2014.05.009
    In real applications processing large numbers of particles in the one-dimensional heat conduction problem, the response time of the serial CPU algorithm and the MPI parallel algorithm is too long. Since the Graphics Processing Unit(GPU) offers powerful parallel processing capability, a GPU parallel heat conduction algorithm is implemented in the Compute Unified Device Architecture(CUDA) programming environment using a CPU-GPU collaborative mode. The algorithm sets the block and grid sizes based on the GPU hardware configuration. Particles are divided into blocks and transferred to the GPU for parallel computing, with one thread performing the calculation for one particle. The processed data is then copied back to CPU main memory, where the average heat flow of each particle is calculated. Experimental results show that, compared with the serial CPU algorithm, the GPU parallel algorithm has a great advantage in time efficiency: the speedup approaches 900 and grows as the number of particles increases.
  • ZHU Yan-jun, WU Xiang-yang
    Computer Engineering. 2014, 40(5): 45-48. https://doi.org/10.3969/j.issn.1000-3428.2014.05.010
    In multi-dimensional data analysis and processing, data with missing or unknown values is ubiquitous, and using the latent structure of the known data to reconstruct the missing data is an urgent problem. Previous work on missing data filling mostly targets low-dimensional data in matrix or vector form, while research on data of three or more dimensions is scarce. To solve this problem, this paper proposes a multi-dimensional data filling algorithm based on tensor decomposition, fully exploiting the structure of tensor decomposition and the uniqueness of the CP model to fill multi-dimensional data effectively. Experiments filling images with missing data stored as 3D tensors, compared with the CP-WOPT algorithm, show that the algorithm is both accurate and fast.
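    A minimal sketch of CP-based tensor completion, assuming the tensorly library as a stand-in: fit a CP model only to observed entries (via a mask), then rebuild the tensor to fill the gaps. The rank and data are toy choices; the paper's own algorithm may differ in detail.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

data = tl.tensor(np.random.rand(10, 10, 3))   # e.g. an RGB image block
mask = np.random.rand(10, 10, 3) > 0.3        # True where data is observed
observed = data * mask

# fit a rank-5 CP model to the observed entries only
cp = parafac(observed, rank=5, mask=tl.tensor(mask.astype(float)))
filled = tl.cp_to_tensor(cp)                  # reconstruction fills the gaps
print(float(np.abs(filled - data)[~mask].mean()))  # error on missing entries
```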
  • SONG Ya-nan, ZHONG Qian, QU Guang-liang, LI Xing-li
    Computer Engineering. 2014, 40(5): 49-53,58. https://doi.org/10.3969/j.issn.1000-3428.2014.05.011
    Most utility-based wireless resource allocation methods focus only on optimal allocation within a single base station. Because they ignore base station selection, these methods cannot achieve the optimal utility of the whole network. This paper therefore proposes a utility-based multi-base-station cooperative wireless resource allocation method with two stages. In the first stage, base stations are selected according to their congestion degree; in the second stage, the resources within each base station are allocated according to marginal utility. Simulation shows that the proposed method and the baseline method make the same choice in 218 of 268 base station selections (81.3%), and the average elapsed time of the proposed method is only 0.066 s, much lower than the baseline's 0.926 s. These results show the rationality of the base station selection method and the efficiency and effectiveness of the resource allocation method.
  • TANG Chao-wei, SHI Hao, ZHOU Xu, BAI Fan, ZHAO Zhi-feng, YAN Ming
    Computer Engineering. 2014, 40(5): 54-58. https://doi.org/10.3969/j.issn.1000-3428.2014.05.012
    The rise of the mobile Internet and tri-network integration makes it challenging to select a resource peer for a requesting peer in Peer-to-Peer(P2P) streaming systems: because of network bandwidth limits and terminal heterogeneity, the video of the selected peer may not match the terminal capabilities of the requesting peer. To solve this problem, a peer selection algorithm based on Scalable Video Coding(SVC) layer matching degree is proposed. In a heterogeneous network setting, the algorithm combines SVC layers with network access types and comprehensively considers peer bandwidth and link round-trip delay. Experimental results show that, compared with random selection, this algorithm yields a higher matching degree between the SVC video layers of the selected peer and the terminal capability of the requesting peer (77.8, 72.2, and 88.9 for fixed, wireless, and 3G access networks respectively) and a higher average terminal service (110 for PC, 78.3 for notebook, and 38.3 for mobile).
  • LV Lin-tao, HU Lei-lei, YANG Yu-xiang, TAN Fang
    Computer Engineering. 2014, 40(5): 59-61,67. https://doi.org/10.3969/j.issn.1000-3428.2014.05.013
    This paper presents S-LEACH, a secure and low-energy clustering routing protocol that addresses the deficiencies of the classical adaptive routing protocol LEACH in network lifetime and security. Based on a trust model, it evaluates each node in the monitored environment from three aspects: node data, communication bandwidth, and residual energy. It builds a node credibility set to select cluster heads according to a threshold value and uses the firefly algorithm for clustering. The base station communicates via multi-hop routing to reduce extra energy consumption. Experimental results show that S-LEACH prolongs network lifetime more than four times compared with LEACH, and improves untrusted-node detection by 2.3% compared with the BTSR protocol.
  • LIU Tao, CHENG Dong-nian, TIAN Ming
    Computer Engineering. 2014, 40(5): 62-67. https://doi.org/10.3969/j.issn.1000-3428.2014.05.014
    Content Centric Network(CCN) is a novel content distribution paradigm with name-based routing. However, the basic CCN routing mechanism only creates routing entries toward server content and lacks routes to content cached on nodes, leading to low cache utilization and large content access latency. To solve this problem, Short-cut Routing(SCR) is presented. Based on notifications of cached content replicas, it enables nodes to perceive neighbors' cache information and retrieve content from the best content source. Simulation results show that SCR significantly reduces average delay compared with the basic routing mechanism, which does not route to cached content; given a cache capacity of 60 content objects, SCR reduces server load by 43%.
  • YUAN Bo, ZHAO Dan-feng, QIAN Jin-xi
    Computer Engineering. 2014, 40(5): 68-72. https://doi.org/10.3969/j.issn.1000-3428.2014.05.015
    Fountain codes produce large amounts of redundant encoded packets that require large memory space, which degrades the real-time performance of Wireless Sensor Networks(WSN). A split encoding and decoding system based on average-frame-length Luby Transform(LT) codes is therefore designed. A typical topology model is built, network coding is cascaded with fountain codes in data transmission, and an improved coding compression algorithm for the average-frame-length LT code generator matrix is introduced. A weighted average method and a multi-bit packaging method are applied in the WSN hierarchy, which greatly reduces storage redundancy without damaging the properties of fountain codes. Experimental results show that the system reduces the storage redundancy in the WSN by a compression ratio of 103, raises the encoding and decoding rates in the WSN, and improves the recovery rate of the data center.
  • CHENG Feng, FENG Dong-qin, CHU Jian
    Computer Engineering. 2014, 40(5): 73-80. https://doi.org/10.3969/j.issn.1000-3428.2014.05.016
    To meet the reliability, determinism, and real-time requirements of data communication in industrial wireless networks, this paper proposes a reliable real-time routing algorithm based on EPA. The algorithm achieves disjoint multipath routing using a neighbor list built from short address assignment and periodic network synchronization packets. Considering link quality and remaining transmission time, it selects a real-time path based on a shortest-path diffusion mechanism to reduce transmission delay caused by link failure. It also provides a link failure handling method and a network loop detection method based on a forwarding record table and a blacklist mechanism, which ensure reliable data transmission and improve bandwidth utilization. Test results show that the algorithm guarantees a data reception ratio of about 99% and decreases average transmission delay by 30%, ensuring both reliable and real-time data transmission.
  • XIE Dai-jun, KONG Fan-zeng, HU Han-ying
    Computer Engineering. 2014, 40(5): 81-85. https://doi.org/10.3969/j.issn.1000-3428.2014.05.017
    Received Signal Strength(RSS) differs when measured by different terminal hardware, which makes traditional RSS fingerprints poorly robust. Applying Signal Strength Difference(SSD) to indoor positioning, a robust SSD location fingerprint is proposed to resolve this problem. The robustness of SSD, RSS, and the Hyperbolic Location Fingerprint(HLF) is analyzed theoretically, and experiments with the traditional K-Nearest Neighbor(KNN) localization algorithm are carried out on the three fingerprints in an actual Wireless Local Area Network(WLAN) environment, using the same terminal and different terminals in the training and positioning phases. Experimental results show that, compared with RSS and HLF, SSD is more robust against mobile terminal heterogeneity.
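    A minimal sketch of why SSD resists terminal heterogeneity: subtracting one AP's RSS from the others cancels any constant terminal-specific gain, and KNN then matches fingerprints. The fingerprint database values are toy placeholders.

```python
import numpy as np

def to_ssd(rss):
    rss = np.asarray(rss, dtype=float)
    return rss[1:] - rss[0]          # differences against a reference AP

# training fingerprints (RSS from 3 APs) and their positions
db_rss = [[-40, -55, -70], [-60, -50, -45]]
db_pos = [(0, 0), (5, 3)]

def locate(rss, k=1):
    ssd = to_ssd(rss)
    dists = [np.linalg.norm(ssd - to_ssd(f)) for f in db_rss]
    nearest = np.argsort(dists)[:k]
    return np.mean([db_pos[i] for i in nearest], axis=0)

# a different terminal may add a constant offset to every RSS reading;
# SSD cancels it, so the estimate is unchanged
print(locate([-45, -60, -75]))       # -5 dB offset on all readings -> (0, 0)
```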
  • TIAN Xin-ji, JIANG Li-min
    Computer Engineering. 2014, 40(5): 86-88. https://doi.org/10.3969/j.issn.1000-3428.2014.05.018
    For the problem of co-channel interference over two-user Multi-Input Multi-Output(MIMO) Multiple Access Channels(MAC), an interference cancellation scheme is proposed in which the transmitted signals are diagonalized according to feedback information. Through proper design of the preprocessing matrices, co-channel interference is eliminated after linear processing of the received signals. This not only improves reliability but also allows each signal to be decoded separately by Maximum Likelihood(ML). Simulation results show that, with two receive antennas and 4QAM, the proposed scheme gains 2 dB at a Bit Error Rate(BER) of 10^-3 over the existing interference cancellation scheme.
  • LI Yu-min, YU Ji-guo, WAN Sheng-li
    Computer Engineering. 2014, 40(5): 89-93. https://doi.org/10.3969/j.issn.1000-3428.2014.05.019
    Topology control is an important issue in Wireless Sensor Network(WSN) research. Most existing work on topology control focuses on reducing energy consumption but does not consider the effects of interference. This paper proposes PLTCA, a centralized topology control algorithm under the physical Signal to Interference plus Noise Ratio(SINR) model, with the objective of maximizing network capacity. It constructs the topology by computing forward and backward lists within three or fewer hops. PLTCA uses power control, and each node can choose its own neighbors by changing its transmission direction or transmission power, thereby controlling the network topology. Theoretical analysis shows the connectivity of the links. Simulation results show that the algorithm guarantees network connectivity and decreases network energy dissipation; PLTCA outperforms the MaxSR algorithm by 10%~20% in average link energy loss.
  • LI Kun-li, ZHANG Da-fang, GUAN Hong-tao, XIE Gao-gang
    Computer Engineering. 2014, 40(5): 94-98,102. https://doi.org/10.3969/j.issn.1000-3428.2014.05.020
    To meet the demands of future networks, researchers are paying close attention to next-generation network architectures. As the core equipment for building virtual network platforms, one type of future network architecture, the design and implementation of virtual router systems is a research hotspot. This paper presents a management and control plane for a virtual router, introducing its design and implementation and analyzing its functional modules and key technologies. The plane is realized in a Linux environment by combining container-based virtualization, the C language, shell scripts, the Netlink protocol, the Quagga routing suite, and other technologies. System tests show that the control plane allocates physical device resources effectively, isolates virtual router creation and management efficiently, manages network information, and interacts with the forwarding plane. On top of high performance, the design gives the control plane flexibility, portability, and extensibility.
  • LI Hong-yu, FU Dong-lai
    Computer Engineering. 2014, 40(5): 99-102. https://doi.org/10.3969/j.issn.1000-3428.2014.05.021
    To improve the efficiency, privacy protection, and scalability of remote attestation, a new method for measuring the integrity of trusted entities is proposed. Built on Remote Attestation based on Merkle Hash Tree(RAMT), the method takes the usage frequency of trusted entities into account and leverages techniques including group signatures and dynamic Huffman algorithms. It thus dramatically reduces the storage space needed for the measurement log of executables, hides information about specific software, and shortens the verification path. The algorithms for software distribution, integrity measurement, and verification are given, and their advantages are described in terms of verification efficiency, privacy protection, and scalability. Analysis shows that privacy protection is enhanced, and the efficiency and scalability of remote attestation are greatly improved.
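    A minimal sketch of a Merkle hash tree over executable measurements, the structure RAMT builds on: leaves are file hashes, parents hash the concatenation of their children, and the root summarizes the whole measurement log so single entries can be verified via short paths.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:               # duplicate last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

measurements = [b"/bin/ls", b"/bin/cat", b"/usr/bin/ssh"]
print(merkle_root(measurements).hex())
```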
  • TAN Rui-neng, LU Yuan-yuan, TIAN Jiao-ling
    Computer Engineering. 2014, 40(5): 103-108,114. https://doi.org/10.3969/j.issn.1000-3428.2014.05.022
    SM4, published in 2006, is the first block cipher released by the Chinese government. To resist Side-channel Attacks(SCA) such as power analysis and electromagnetic radiation, a multi-path multiplicative masking method is proposed to improve the security of the SM4 algorithm. Multiple data paths are used, and the S-box is transformed by multiplicative inversion in the finite field with a random number injected, so that all intermediate variables of the proposed scheme differ from those of the standard algorithm. This covers all key information in the encryption process and increases the difficulty of SCA. Compared with the traditional algorithm and existing schemes, experimental results show that the masking scheme effectively weakens the correlation between power consumption characteristics and operations on intermediate data without much extra power or hardware. The proposed method thus withstands various side-channel attacks, improving the security of SM4.
  • PENG Fei, ZENG Xue-wen, DENG Hao-jiang, LIU Lei
    Computer Engineering. 2014, 40(5): 109-114. https://doi.org/10.3969/j.issn.1000-3428.2014.05.023
    Recommender systems based on collaborative filtering are vulnerable to shilling attacks, so this paper proposes an Unsupervised Detection Algorithm of Shilling Attack Based on Feature Subset(UnDSA-FS). A feature named Kurtosis Coefficient of Interest(KCI) is proposed to describe the concentration of a user's interest. Taking KCI and existing features as the candidate feature set, the algorithm uses unsupervised feature selection to choose a proper feature subset for different attack strategies. It computes the distance sum of each user, sorts users by this sum, and identifies the attack target. It then sets a sliding window on the sorted user sequence and filters attack users by calculating the mean rating deviation for the attack target. Experimental results verify that the information gain of KCI is higher than that of existing features, and that UnDSA-FS achieves better stability and precision than existing unsupervised detection methods.
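    A rough sketch of a kurtosis-style interest feature: the kurtosis of a user's rating distribution reflects how concentrated the user's interest is, and shilling profiles tend to be extreme. Whether this matches the paper's exact KCI definition is an assumption; the ratings are toy data.

```python
from scipy.stats import kurtosis

genuine_user = [1, 2, 3, 4, 5, 2, 3, 4, 3, 3]   # varied, genuine interests
attack_user = [5, 5, 5, 5, 5, 5, 5, 1, 5, 5]    # pushes one target item

print(kurtosis(genuine_user))  # moderate spread, low kurtosis
print(kurtosis(attack_user))   # heavy concentration, large kurtosis
```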
  • SHAO Xiu-li, JIANG Hong-ling, GENG Mei-jie, LI Yao-fang
    Computer Engineering. 2014, 40(5): 115-119. https://doi.org/10.3969/j.issn.1000-3428.2014.05.024
    Existing botnet detection methods generally involve heavy computation, which results in low detection efficiency. Cloud computing, with its powerful data processing and analysis capabilities, provides new ideas and solutions for botnet detection. This paper therefore designs and implements a parallel botnet detection algorithm based on the MapReduce model, which uses cloud collaboration and flow correlation to detect botnets. It extracts the relationships between flows, gathers related flows, and calculates a score for each host; hosts whose score exceeds a threshold are suspected bots. Experimental results show that the algorithm is effective: the detection rate for P2P botnets reaches more than 90%, and the false alarm rate stays below 4%. As the number of cloud-side computing nodes increases, uploading data from cloud clients and detecting botnets become more efficient.
  • LIU Yu, XUE Kai-ping
    Computer Engineering. 2014, 40(5): 120-123. https://doi.org/10.3969/j.issn.1000-3428.2014.05.025
    Electronic auctions are the online realization of traditional auctions. Because of their privacy protection and security, sealed-bid auction schemes attract widespread attention, but most assume a trusted third party, which is often hard to establish in practice. Based on the Lagrange threshold secret sharing scheme and a bit commitment mechanism, a distributed electronic auction scheme with multiple servers is proposed. In the bidding phase, the bidder computes shares of the bid using Lagrange threshold secret sharing and gives them separately to different auction servers. In the opening phase, no fewer than a threshold number of servers submit their shares, and the final winning bidder can be verified by the bit-commitment-based method. The scheme avoids the bottleneck of a single auction server and cuts the computational overhead of the auction process. It protects users' privacy: only the identity of the final winning bidder and the corresponding bid price are revealed. Security and performance analysis shows that the scheme satisfies the requirements of a secure electronic auction while reducing computation and communication overhead.
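    A minimal sketch of Lagrange (Shamir) threshold secret sharing as used in the bidding phase: the bid is split into shares, and any t of them reconstruct it via Lagrange interpolation at zero. The prime and parameters are toy choices.

```python
import random

P = 2**127 - 1  # a known Mersenne prime as the field modulus (toy choice)

def make_shares(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P        # Lagrange numerator at x = 0
                den = den * (xi - xj) % P    # Lagrange denominator
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(secret=4200, t=3, n=5)  # bid 4200, threshold 3 of 5
print(reconstruct(shares[:3]))               # any 3 shares recover the bid
```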
  • WANG Jun, JI Chang-peng, WANG Yang, WANG Lian-peng
    Computer Engineering. 2014, 40(5): 124-128. https://doi.org/10.3969/j.issn.1000-3428.2014.05.026
    With the rapid development of Peer-to-Peer(P2P) networks in recent years, many unsafe service problems have appeared. To address this, a trust mechanism based on the cloud model is proposed, along with a cloud model algorithm based on nodes' trust vectors. For the nodes matching a query, it uses grey theory to predict the current-cycle trust vector from previous trust vectors, builds cloud models for the current trust vectors, and selects a trustworthy node to serve the querying node according to a trust-cloud decision algorithm and a trust-cloud similarity measure. Simulation results show that when selecting target service nodes, this mechanism depends not only on a node's mean trust value but also takes the discreteness of trust values into consideration. It can therefore select the most trusted node from different angles, improve the quality of network service, and enhance network security.
  • DENG Sheng-yuan, LU Jian-zhu, YANG Jing-jing, CHEN Ting
    Computer Engineering. 2014, 40(5): 129-133. https://doi.org/10.3969/j.issn.1000-3428.2014.05.027
    To improve the security and fairness of access control in Wireless Sensor Networks(WSN), this paper introduces role theory and proposes an improved role-based access control scheme for WSNs. Combining role authorization with a smart-card authentication model, the scheme improves the security and fairness of the session key as well as the flexibility and reusability of system permission management. Mutual authentication allows the two parties to detect and reject incorrect or incomplete exchanged information. Theoretical analysis and evaluation show that, compared with the Das scheme, this scheme reduces communication cost by 384 bit, has reasonable computation cost and comparable storage cost, and is more secure.
  • WANG Ya, XIONG Yan, GONG Xu-dong, LU Qi-wei
    Computer Engineering. 2014, 40(5): 134-138. https://doi.org/10.3969/j.issn.1000-3428.2014.05.028
    Mobile Ad hoc Network(MANET) is a wireless ad hoc network vulnerable to attacks by inside malicious nodes. Because inside attack behavior is complex, malicious nodes are difficult to identify. To solve this problem, this paper presents a method for identifying inside malicious nodes based on fuzzy mathematics. By analyzing a node's communication behavior, it forms an attribute vector consisting of the node's average packet forwarding delay, forwarding ratio, and packet loss ratio, then classifies the node using the principle of maximum membership grade. Experiments are simulated in NS2 under different scenarios and malicious node densities. The results show that node movement speed has little impact on recognition, while malicious node density has a larger impact; even when malicious nodes are dense, reaching 30%, the recognition ratio remains above 96% and the false recognition ratio stays below 5%.
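    A toy sketch of maximum-membership-grade classification over the (forwarding delay, forwarding ratio, packet loss ratio) attribute vector named above. The membership functions are illustrative assumptions, not the paper's.

```python
def membership_normal(delay, fwd_ratio, loss):
    # high forwarding, low loss, low delay -> high membership in "normal"
    return min(1.0, fwd_ratio) * (1 - loss) * (1 / (1 + delay))

def membership_malicious(delay, fwd_ratio, loss):
    # dropped packets and added delay raise membership in "malicious"
    return (1 - fwd_ratio) * loss + delay / (1 + delay) * 0.5

def classify(node):
    grades = {
        "normal": membership_normal(*node),
        "malicious": membership_malicious(*node),
    }
    return max(grades, key=grades.get)   # principle of maximum membership

print(classify((0.05, 0.95, 0.02)))      # well-behaved node -> "normal"
print(classify((0.80, 0.30, 0.60)))      # dropping/delaying node -> "malicious"
```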
  • PENG Jing-yu, ZHAO He-ming
    Computer Engineering. 2014, 40(5): 139-143. https://doi.org/10.3969/j.issn.1000-3428.2014.05.029
    Given the insufficiency of traditional encryption methods in color image compression, an association algorithm of color image encryption and compression is proposed for secure communication. A hybrid chaotic system is designed to scramble the secret image in the time domain. The chaotic system also generates a transformation matrix for the confusion transform of color images, which changes the carrier image pixels. Each pixel code value of the secret image corresponds one-to-one to the coordinates of the best-matching pixel, searched within a set area of the carrier image by the minimum-Euclidean-distance principle. After compression coding, the secret image data is no longer traditional pixel values but a group of code values corresponding to serial numbers or subscripts. When the original image is compressed by 67 percent, the similarity between the reconstructed and original images remains above 95%. The key space, key sensitivity, and statistical features of the encrypted image are also analyzed by simulation. Results show that the compression coding method is secure, achieves a large compression ratio, and is an effective, easy-to-implement encryption algorithm.
  • PANG Xi-yu, WANG Cheng, TONG Chun-ling
    Computer Engineering. 2014, 40(5): 144-148. https://doi.org/10.3969/j.issn.1000-3428.2014.05.030
    The access control requirements of Web application systems and the shortcomings of the Role-based Access Control(RBAC) model in such systems are analyzed, and an access control approach based on a role-function model is proposed, with its implementation details discussed. Based on the page organization structure that forms naturally from the system's business function requirements and users' access control requirements, the business functions of pages are partitioned into bottom-level menus, which form the basic unit of permission configuration. User access to system resources such as Web pages, HTML elements, and in-page operations is controlled by configuring the relations among users, roles, pages, menus, and functions. Practical application in the scientific research management system of Shandong Jiaotong University shows that enforcing access control at the page, menu, and business-function level meets enterprise requirements for user access control in Web systems. The approach is simple to operate, highly general, and effectively reduces the workload of Web system development.
  • DUAN Guo-yun, CHEN Hao, HUANG Wen, TANG Ya-chun
    Computer Engineering. 2014, 40(5): 149-153. https://doi.org/10.3969/j.issn.1000-3428.2014.05.031
    Commonly used Web servers lack page integrity protection mechanisms, which leaves target websites exploitable by attackers. To ensure website integrity and prevent users from visiting tampered pages, this paper proposes a tamper-proof mechanism based on file content hashing. By calculating fingerprints of target files and using snapshots to recover tampered files, the system protects dynamic websites and recovers efficiently from failures or targeted attacks. The design and implementation of a tamper-proof system for Web applications are presented in detail. Experimental results show that, compared with existing systems, the system implements tamper protection and snapshot recovery effectively while imposing little runtime cost on the protected server.
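    A minimal sketch of hash-based tamper detection: fingerprint every file once, then compare current hashes against the baseline to spot tampered pages. The web root path is hypothetical; the snapshot recovery step is only noted in a comment.

```python
import hashlib, os

def fingerprint(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def baseline(root):
    return {p: fingerprint(p)
            for dirpath, _, files in os.walk(root)
            for p in (os.path.join(dirpath, f) for f in files)}

def tampered(root, base):
    current = baseline(root)
    return [p for p, digest in base.items() if current.get(p) != digest]

# usage sketch: snapshot once, check later, restore changed files from snapshot
site_baseline = baseline("/var/www/html")      # hypothetical web root
print(tampered("/var/www/html", site_baseline))
```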
  • XUN Zhong-kai, HUANG Hao, JIN Yin-cheng
    Computer Engineering. 2014, 40(5): 154-157. https://doi.org/10.3969/j.issn.1000-3428.2014.05.032
    Virtual network data transmission suffers from low performance caused by frequent switching between user mode and kernel mode and by multiple data copies between virtual domains. This paper therefore proposes a high-performance virtual machine firewall that combines network packet filtering with the high performance of SR-IOV, letting virtual domains interact directly with the physical network card. Because a firewall running in a low-privilege virtual domain is vulnerable to attack, the higher privilege level of Xen is used to monitor the firewall module in real time and protect it from illegal access. Experimental results show that deploying the SR-IOV network card doubles the firewall's network I/O performance compared with the Xen network I/O access mode, and that the monitor module in Xen successfully prevents unauthorized access to and malicious tampering with the firewall, ensuring its security.
  • CHEN Qun, YANG Dong-yong, LU Jin
    Computer Engineering. 2014, 40(5): 158-163. https://doi.org/10.3969/j.issn.1000-3428.2014.05.033
    Aiming at the large error of video speed measurement in checkpoint environments, a method to improve velocity precision is proposed. The second character of the license plate is used as the feature of the vehicle positioning block. Wavelet decomposition is applied to the characteristic curve of the character's outer boundary; feature matching yields the offset of the curve relative to the template at low resolution, and the outer boundary is then adjusted at the initial resolution to locate the vehicle precisely. Combined with the fixed height of the characters, the actual coordinates of the block are determined, improving the estimate of the distance the vehicle travels. The algorithm is tested in a real environment for more than two hours and compared with coil-based speed measurement and video speed measurement based on license plate location. Results show that, at normal speeds, the error is within 3 km/h of the coil measurement.
  • FENG Zhen, GUO He, WANG Yu-xin, JIA Qi, HOU Guang-feng
    Computer Engineering. 2014, 40(5): 164-167. https://doi.org/10.3969/j.issn.1000-3428.2014.05.034
    Aiming at the long sampling time in Magnetic Resonance Imaging(MRI), a new Compressed Sensing(CS) method is proposed. Singular Value Decomposition(SVD)-based sparse representation is effective but little studied in the CS-MRI field. This sparse representation is improved using the partially known signal support method, and a hybrid support detection method is proposed to exploit both the position and magnitude of the sparse signals. The hybrid support detection is then applied in the Fast Composite Splitting Algorithm(FCSA), an effective reconstruction algorithm for CS-MRI. Experimental results show that the proposed algorithm outperforms FCSA with wavelets and FCSA with SVD in reconstructed image quality: its PSNR is 2.21 dB~12.72 dB higher than FCSA with wavelets and 0.87 dB~2.05 dB higher than FCSA with SVD, and its reconstruction time is 36.91 s versus 103.21 s for FCSA with wavelets.
  • LIU Li-qun, WANG Lian-guo, HUO Jiu-yuan, HAN Jun-ying, LIU Cheng-zhong
    Computer Engineering. 2014, 40(5): 168-172. https://doi.org/10.3969/j.issn.1000-3428.2014.05.035
    To overcome the slow convergence and low optimization precision of the Shuffled Frog Leaping Algorithm(SFLA) on complex problems, a Shuffled Frog Leaping Algorithm Based on Fuzzy Threshold Compensation(FTCSFLA) is proposed. A fuzzy grouping idea divides the frogs into fuzzy groups, and the disturbance strategy in the local search of basic SFLA is improved. Each fuzzy group is given a total membership threshold and a total compensation coefficient, and each frog is given a fuzzy membership scaled by the distribution of neighboring frogs. In the local search, the worst individual of each group is updated by one of two methods, chosen by comparing its fuzzy membership with the membership threshold; a compensation coefficient gives the two methods a unified expression. Experimental results show that FTCSFLA with a membership threshold of 0.9 converges faster and more precisely than SFLA and than FTCSFLA with a threshold of 0.5. The evolution curves show that convergence precision and speed are optimal when the membership threshold lies in (0.5, 0.9].
  • WANG Chong, LEI Xiu-juan
    Computer Engineering. 2014, 40(5): 173-177. https://doi.org/10.3969/j.issn.1000-3428.2014.05.036
    Traditional partition clustering over-relies on the initial cluster centers and is prone to falling into local optima, so an improved partition clustering algorithm based on the firefly algorithm is proposed. The algorithm treats each firefly as a set of cluster centers and cluster cohesion as the firefly's brightness, then finds the optimal cluster centers through the mutual attraction of fireflies. During optimization, a randomly distributed firefly population overcomes the over-reliance on initial cluster centers, and an adaptive step strategy strengthens the algorithm's ability to find exact solutions. To prevent the algorithm from falling into local optima as the population concentrates, niche technology is introduced to improve the diversity of the firefly population. Experimental results indicate that the algorithm improves clustering precision and stability compared with traditional clustering algorithms.
  • LU Xian-ling, WANG Hong-bin, WANG Ying-ying, XU Xian
    Computer Engineering. 2014, 40(5): 178-182. https://doi.org/10.3969/j.issn.1000-3428.2014.05.037
    Two novel features for acceleration data are applied to improve the recognition accuracy of human activities. One feature captures the essence of acceleration direction by calculating the Wavelet Energy(WE) of the angle between the acceleration vector and the gravity direction, distinguishing activities through time-frequency analysis. The other is extracted from the slopes of the lines connecting key points after the acceleration data is rearranged, highlighting the difference and distribution of the data. The two novel features are combined with six widely used traditional features into feature sets, which are used to train a multi-class classifier based on Support Vector Machine(SVM) and to identify seven Activities of Daily Living(ADL). Test results show that the average recognition accuracy reaches 92.70% with an independent test and 95.08% with leave-one-out cross validation.
  • ZHAO Wen-liang, GUO Hua-ping, FAN Ming
    Computer Engineering. 2014, 40(5): 183-187,191. https://doi.org/10.3969/j.issn.1000-3428.2014.05.038
    This paper proposes a new Tri-Training algorithm based on feature transformation. It transforms labeled instances into new spaces to obtain new training sets, constructing accurate and diverse classifiers and avoiding the weakness of bootstrap sampling, which trains base classifiers on only a sample of the training data. To make full use of data distribution information, a new transformation called Transformation Based on Must-link Constrains and Cannot-link Constrains(TMC) is introduced and applied to the new Tri-Training algorithm. Experiments on UCI data sets show that, at different unlabeled rates, the proposed feature-transformation-based algorithm achieves the highest accuracy on most data sets compared with the classic Co-Training and Tri-Training algorithms. In addition, compared with the Tri-LDA and Tri-CP algorithms, the TMC-based Tri-Training algorithm generalizes better.
  • LIU Lin, LIU San-ya, LIU Zhi, TIE Lu
    Computer Engineering. 2014, 40(5): 188-191. https://doi.org/10.3969/j.issn.1000-3428.2014.05.039
    For Bulletin Board System(BBS) sentiment classification, an improved Random Subspace Method(RSM) is proposed that tries to make full use of the discriminative information in the high-dimensional feature space. When generating subspaces, a weighting function evaluates the classification ability of each feature and better features are chosen with higher probability, ensuring classification accuracy; meanwhile, the subspace size is enlarged and principal component analysis reduces its dimension, ensuring efficiency and diversity. Experimental results show that the proposed algorithm achieves a best accuracy of 91.3%, higher than the conventional RSM.
  • SUN Bo-wen, QIU Zi-jian, SHEN Bin, ZHANG Yan-peng
    Computer Engineering. 2014, 40(5): 192-195,202. https://doi.org/10.3969/j.issn.1000-3428.2014.05.040
    Feature point matching underlies many computer vision problems. Among existing algorithms, the fern algorithm has the great advantage of being simple and fast, but the trained classifier is too large for low-memory devices such as cellphones. To cut down classifier size, this paper proposes an improved version of fern, named Oriented Fern(OFern): patches are first normalized in orientation, features are extracted from the normalized patches, and a Naive Bayesian model is built to train the classifier. Experimental results show that, compared with traditional fern, OFern reduces memory use to 1/8~1/16 at a similar recognition rate while remaining fast enough for real-time applications.
  • QIN Tian-bao, PENG Jia-yao, SHA Mei
    Computer Engineering. 2014, 40(5): 196-202. https://doi.org/10.3969/j.issn.1000-3428.2014.05.041
    A Mixed Integer Programming(MIP) model and a Constraint Programming(CP) model are proposed for the integrated quay crane and yard truck scheduling problem for inbound containers, aiming to minimize the makespan of the unloading process. The CP model is developed in the OPL modeling language and employs OPL constructs designed for scheduling problems, e.g., interval variables and sequence variables. To improve solving efficiency, a concept called the extended operation task is proposed and used to define interval variables, and a new lower bound is given to evaluate solution quality. Computational experiments on instances of various scales test the CP and MIP models. The results indicate that the CP model does not outperform the MIP model on small instances; for medium and large instances, however, the MIP model cannot be solved within the time limit, whereas the CP model finds high-quality solutions and solves large problems efficiently with fast convergence. On average, the gap between the CP model's objective values and the lower bounds is 2.19%~8.28%.
  • SHEN Jia-jie, JIANG Hong, WANG Su
    Computer Engineering. 2014, 40(5): 203-208,215. https://doi.org/10.3969/j.issn.1000-3428.2014.05.042
    Aiming at the premature convergence and slow convergence speed of multi-objective Differential Evolution(DE) algorithms in high-dimensional situations, this paper proposes an improved multi-objective DE algorithm based on multi-mutation samples. By introducing multiple mutation individuals into the mutation and crossover operators, the population keeps its diversity, the possibility of falling into local optima is reduced, and the optimal solution is found in fewer iterations than with the standard multi-objective DE algorithm. Experimental results show that, compared with standard multi-objective DE algorithms, the improved algorithm finds optimal values effectively in high-dimensional multi-objective environments.
  • DU Yuan-wei, YANG Na, SHI Fang-yuan
    Computer Engineering. 2014, 40(5): 209-215. https://doi.org/10.3969/j.issn.1000-3428.2014.05.043
    To solve hierarchical fused decision problems with a clear hierarchical structure and ordered decision implementation, in which independent and interrelated relationships co-exist among the subjective evidence given by decision makers, a fusion mechanism matching the characteristics of hierarchical decision making is proposed to eliminate the repeated synthesis of supervisors' subjective evidence, and, following a de-synthesis rule, the fusion rule for two interrelated pieces of evidence is extended to more. Decision steps for double-layer multi-source subjective fusion decision are then constructed from the proposed integration rule for interrelated evidence and the traditional Dempster combination rule, following "up to down" and "inner to outer" sequences in which hierarchical weights are reflected. A decision problem is solved by three methods, and comparative analysis of the data shows that the proposed method is scientific and efficient.
  • HUANG Zhen-xiang, PENG Bo, WU Juan, WANG Ru-peng
    Computer Engineering. 2014, 40(5): 216-218,223. https://doi.org/10.3969/j.issn.1000-3428.2014.05.044
    In dynamic gesture recognition, the Dynamic Time Warping(DTW) algorithm excels at eliminating time differences between space-time expression patterns, but it is in essence a template matching algorithm: its performance is limited by the size of the sample database, and it lacks a statistical model framework for training. Its recognition results are unsatisfactory and unstable, especially with large amounts of data, complex gestures, or combined gestures. To address these deficiencies, this paper proposes a gesture recognition algorithm based on DTW and a Combined Discriminative Feature Detector(CDFD). It warps gesture signals only in the time domain, uses combined discriminative feature detectors to transform the probability distribution of gesture features into a binary piecewise linear function, outputting zero or one according to permissible deviation ranges, and finally classifies the gestures. Experimental results show that the algorithm discards non-discriminative features, reducing dimensionality and noise, and the average gesture recognition rate reaches 91.2%, an increase of 6.0% over DTW alone.
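    A minimal sketch of the DTW distance used to align gesture signals in time; the CDFD feature detection stage described above is not reproduced here, and the two sequences are toy 1D signals.

```python
import numpy as np

def dtw(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

slow = [0, 1, 2, 3, 2, 1, 0]
fast = [0, 2, 3, 1, 0]          # same gesture, performed faster
print(dtw(slow, fast))          # small distance despite the tempo change
```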
  • WANG Si-ming, ZHAO Wei
    Computer Engineering. 2014, 40(5): 219-223. https://doi.org/10.3969/j.issn.1000-3428.2014.05.045
    Background modeling based on Gaussian Mixture Models(GMM) is widely used in moving object detection, but it cannot accurately detect moving objects in video sequences with rapid illumination changes. Moreover, if the image used to initialize the GMM parameters contains moving objects, they contaminate the detection results and lead to false detections. To address these problems, a GMM algorithm based on intensity feature autocorrelation is proposed. Brightness autocorrelation parameters identify whether the initialization image contains a moving object, the fit value of the intensity autocorrelation parameters identifies whether the current frame undergoes fast illumination variation, and object detection combines the ideas of GMM and intensity difference. Simulations on real video show that the algorithm is accurate and real-time: moving objects are extracted well from sequences with rapid illumination changes, even when the GMM initialization image contains moving objects.
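    For reference, a minimal sketch of the standard GMM background subtraction baseline that the paper improves on, using OpenCV's MOG2; the autocorrelation-based checks for initialization objects and fast illumination change are not shown, and the video path is hypothetical.

```python
import cv2

cap = cv2.VideoCapture("traffic.mp4")            # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)               # foreground = moving objects
    cv2.imshow("foreground", mask)
    if cv2.waitKey(30) & 0xFF == 27:             # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```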
  • SU Fu-hua, LIU Yun-lian, WU Tie-bin
    Computer Engineering. 2014, 40(5): 224-227,233. https://doi.org/10.3969/j.issn.1000-3428.2014.05.046
    Cuckoo Search(CS) is a population-based optimization algorithm that has been successfully applied in a variety of fields. A modified CS algorithm is proposed for unconstrained optimization problems: a chaos sequence and a dynamic random local search are introduced to enhance optimization ability and improve convergence speed. Tests on a set of 4 benchmark functions, with comparison against six other algorithms, show that the proposed algorithm has strong global search ability and a better convergence rate.
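    A minimal sketch of logistic-map chaos initialization, one of the two modifications mentioned: chaotic sequences spread the initial nests more evenly than plain uniform draws. The bounds, seed, and parameters are toy values, not the paper's settings.

```python
import numpy as np

def chaotic_init(n_nests, dim, low, high, mu=4.0, seed=0.7):
    x = seed
    nests = np.empty((n_nests, dim))
    for i in range(n_nests):
        for d in range(dim):
            x = mu * x * (1 - x)          # logistic map, chaotic at mu = 4
            nests[i, d] = low + (high - low) * x
    return nests

print(chaotic_init(n_nests=5, dim=2, low=-5.0, high=5.0))
```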
  • YAO Ming-hai, WANG Na, YI Yu-gen, LUAN Jing-zhao
    Computer Engineering. 2014, 40(5): 228-233. https://doi.org/10.3969/j.issn.1000-3428.2014.05.047
    Traditional dimensionality reduction methods attend only to the local similarity information of images, neglecting the diversity information and the spatial structure of pixels. A new supervised dimensionality reduction method is therefore proposed that constructs a local similarity graph and a local diversity graph to characterize the local structure of images. Furthermore, a 2D discretized Laplacian smoothing regularization exploiting the spatial structure of pixels is introduced into the objective function. The method preserves the local structure and diversity information between images as well as the spatial structure of pixels, effectively extracting low-dimensional features from face images. Verification on the Yale and ORL databases shows that the method achieves high recognition accuracy.
  • ZHANG Shu-zhen
    Computer Engineering. 2014, 40(5): 234-237,242. https://doi.org/10.3969/j.issn.1000-3428.2014.05.048
    Image noise causes inaccurate segmentation, and common threshold selection methods rely only on probabilistic information from the image histogram without directly considering the uniformity of the inter-class gray distribution. A threshold selection algorithm based on three-dimensional histogram correction and gray entropy decomposition is therefore proposed. It analyzes the influence of noise on the gray values of a pixel's neighborhood and reduces noise interference by modifying the three-dimensional histogram. A threshold selection formula based on three-dimensional gray entropy is presented, and the gray entropy is decomposed to one dimension, reducing the computational complexity from O(L^3) to O(L). Experimental results show that, compared with the two-dimensional maximum entropy algorithm based on oblique segmentation, the two-dimensional cross entropy algorithm based on recursion, and the three-dimensional Otsu algorithm based on dimension reduction, the algorithm has better anti-noise performance and visual quality, and its running time is reduced by at least about 10%.
  • HU Chun-ling, HU Xue-gang, LV Gang
    Computer Engineering. 2014, 40(5): 238-242. https://doi.org/10.3969/j.issn.1000-3428.2014.05.049
    To address the slow convergence of stochastic sampling algorithms, a hybrid Markov chain Monte Carlo sampling algorithm(HSMHS) based on a uniform sampler and an independence sampler is proposed, which improves convergence through the initial sample, the sampling method and the proposal distribution. Initial samples of the network structure are generated from the mutual information between network nodes. In the iterative sampling phase, the uniform sampler or the independence sampler is selected randomly according to a certain probability distribution, and the proposal distribution of the independence sampler is computed from the current samples to improve the mixing of the sampling process. It can be proved that the sampling process of HSMHS converges to the posterior probability of the network structure and that the algorithm has good learning accuracy. Experimental results on standard data sets also verify that HSMHS outperforms the classical algorithms MHS, PopMCMC and Order-MCMC in both learning efficiency and precision.
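    A toy Python sketch of the sampler alternation is given below on a continuous space (the paper applies the same idea to network structures): each step randomly picks a symmetric random-walk move or an independence proposal, with the acceptance ratio corrected accordingly.

```python
import numpy as np

def hybrid_mh(log_post, x0, n_steps=10000, mix=0.5, seed=0):
    """Metropolis-Hastings with a mixture of two proposals. The fixed
    N(0, 2^2) independence proposal here is an illustrative assumption."""
    rng = np.random.default_rng(seed)
    logq = lambda v: -0.5 * np.sum(v ** 2) / 4.0   # log density of N(0, 2^2)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    out = []
    for _ in range(n_steps):
        if rng.random() < mix:                     # random-walk move; q cancels
            y = x + rng.normal(0.0, 0.5, x.shape)
            lpy = log_post(y)
            log_ratio = lpy - lp
        else:                                      # independence move
            y = rng.normal(0.0, 2.0, x.shape)
            lpy = log_post(y)
            log_ratio = lpy - lp + logq(x) - logq(y)
        if np.log(rng.random()) < log_ratio:       # accept or reject
            x, lp = y, lpy
        out.append(x.copy())
    return np.array(out)
```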
  • MA Si-chao, LIU Xin, YE De-jian
    Computer Engineering. 2014, 40(5): 243-246,251. https://doi.org/10.3969/j.issn.1000-3428.2014.05.050
    To make the Quality of Experience(QoE) metrics of Internet Protocol Television(IPTV) services approximate the real user experience more accurately, and considering the heterogeneity of terminals, this paper proposes a design architecture for a probe system embedded in the streaming media player, and gives the implementation of real-time data monitoring and metric calculation inside the player. The system can be integrated into a variety of platforms through a plug-in library, and uses an event-driven model to eliminate the decoding delay and performance problems caused by QoE computing tasks. Test results show that the final QoE metrics are closer to the real user experience under unexpected delay and I-frame packet loss, while the probe system itself has a low resource cost of only a 5% increase and can be easily ported and extended.
  • WANG Yong, ZHANG Lian-hai
    Computer Engineering. 2014, 40(5): 247-251. https://doi.org/10.3969/j.issn.1000-3428.2014.05.051
    This paper proposes a keyword detection method for continuous speech based on a word-level Discriminative Point Process Model(DPPM). Frame-level phone posterior probabilities are computed using temporal patterns and a multilayer perceptron. The DPPM treats the point process produced by the phone events over the keyword duration as a whole: the point process representation is segmented and concatenated into a supervector, which is fed to a Support Vector Machine(SVM) to decide whether the point process is produced by the keyword. Because of long-range context dependencies, directly modeling entire words can be expected to permit more accurate and robust decoding of the speech signal. Experimental results show average keyword recall and precision above 71.5% and 84.6% respectively, with significant further improvement when a language model is used.
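    A toy Python sketch of the back end follows: the representation over a detection window is split into segments whose summaries are concatenated into a fixed-length supervector, which an SVM then classifies. The segmentation and pooling choices here are assumptions, and the posteriorgram is taken as given.

```python
import numpy as np
from sklearn.svm import SVC

def supervector(posteriorgram, n_segments=4):
    """Split a (frames x phones) posteriorgram into equal time segments and
    concatenate the per-segment means into one fixed-length supervector."""
    parts = np.array_split(posteriorgram, n_segments, axis=0)
    return np.concatenate([p.mean(axis=0) for p in parts])

def train_keyword_detector(windows, labels):
    """windows: list of posteriorgrams cut around candidate detections;
    labels: 1 if the window contains the keyword, else 0."""
    X = np.stack([supervector(w) for w in windows])
    return SVC(kernel="rbf", probability=True).fit(X, labels)
```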
  • MA Ying-jie, XIE Wei-kai, SHEN Rui-min
    Computer Engineering. 2014, 40(5): 252-256. https://doi.org/10.3969/j.issn.1000-3428.2014.05.052
    With the rapid development of modern distance education, screen coding for encoding screen content has emerged. A new rate control method is proposed for screen content, which typically mixes video, text and pictures. It is based on the Video Buffer Verifier(VBV) and Constant Rate Factor(CRF) modes, and divides the screen into a video area and a non-video area using a video recognition algorithm that locates the video window on the screen. Combined with this recognition, the method improves rate control at both the frame layer and the macroblock layer. At the frame layer, if the current frame is a P-frame within a certain range around an I-frame, P_SKIP is adopted in the non-video area to lower its effective frame rate, so that the quality of the nearest I-frame is enhanced and the overall visual effect of the screen sequence improves. At the macroblock layer, the method adjusts the Quantization Parameter(QP) and limits its fluctuation range according to whether the current macroblock lies in the video area. Experimental results show that the new rate control method scores 40% higher than the original VBV+CRF mode in subjective assessment.
  • XIONG Xiong, LIU Xin, YE De-jian
    Computer Engineering. 2014, 40(5): 257-261. https://doi.org/10.3969/j.issn.1000-3428.2014.05.053
    With the rapid development of information technology, network Quality of Service(QoS) must keep improving, yet a well-established quality monitoring system for assessing IPTV networks is still lacking. Packet capture and analysis are essential to network QoS monitoring, but existing monitoring systems work on the decoded data flow, which makes the assessment inaccurate. The memory and CPU performance of set-top boxes, the carriers of IPTV, are far below those of a PC, so PC packet capture software cannot run on them. Starting from the quality of experience, this paper designs a packet capture program that runs on the set-top box, uses a state machine to describe the logic between data and control information, and classifies IPTV data packets according to the control information so as to calculate the packet loss rate, the Media Delivery Index(MDI), the request response time and other parameters for the QoS monitoring system. Comparison with a traditional monitoring system shows that this monitoring system is accurate.
  • ZHANG Zhen, ZHAO Qing-wei, YAN Yong-hong
    Computer Engineering. 2014, 40(5): 262-265. https://doi.org/10.3969/j.issn.1000-3428.2014.05.054
    This paper proposes unsupervised methods for searching speech patterns, one based on a speech recognition system and one based on features only. In the recognition-based system, the alternative results of the decoder are searched for audio patterns with a segmental dynamic time warping algorithm, and a graph clustering algorithm together with confidence estimation is used to improve performance. The feature-only system requires no knowledge resources at all. Finally, the two systems are compared on broadcast news and spoken dialogue sets: the recognition-based system achieves better performance, while the feature-based system can be applied to many languages.
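    For reference, a plain dynamic time warping kernel is sketched below; segmental DTW as used in pattern discovery runs many such alignments under band and start-point constraints, which are omitted here.

```python
import numpy as np

def dtw_distance(A, B):
    """Dynamic time warping between two feature sequences (frames x dims);
    returns the length-normalized alignment cost."""
    n, m = len(A), len(B)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(A[i - 1] - B[j - 1])   # local frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)
```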
  • JIN Guo-ping, YU Zong-qiao, GUO Yan-wen, JIANG He
    Computer Engineering. 2014, 40(5): 266-269. https://doi.org/10.3969/j.issn.1000-3428.2014.05.055
    Because digital audio involves very large data volumes, traditional audio retrieval methods can lead to intolerable response times. To speed up audio retrieval, this paper proposes a GPU-accelerated retrieval method. The audio is divided into multiple short segments based on its features, and a characteristic matrix is formed from the eigenvalues computed for each short segment with a GPU-accelerated algorithm. A variant of the suffix array algorithm is used to find the common set of the two eigenvalue sequences, which is then refined and globally matched to produce the retrieval result. Experimental results show retrieval accuracy above 95%; compared with existing algorithms, the method significantly improves retrieval speed, achieving speedups of more than 10 times.
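    The matching step can be pictured with the following toy sketch, which finds shared n-grams between two quantized eigenvalue sequences using a hash index; the paper itself uses a suffix-array variant and runs feature extraction on the GPU.

```python
def common_ngrams(a, b, n=4):
    """Return (i, j) position pairs where sequences a and b share an n-gram;
    these candidate matches would then be refined and globally matched."""
    index = {}
    for i in range(len(a) - n + 1):
        index.setdefault(tuple(a[i:i + n]), []).append(i)
    hits = []
    for j in range(len(b) - n + 1):
        for i in index.get(tuple(b[j:j + n]), []):
            hits.append((i, j))
    return hits
```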
  • LEI Hai-jun, WEI Xiong, YANG Zhang, YUAN Mei-leng
    Computer Engineering. 2014, 40(5): 270-273. https://doi.org/10.3969/j.issn.1000-3428.2014.05.056
    High Efficiency Video Coding(HEVC) is a new video coding standard developed by the Joint Collaborative Team on Video Coding(JCT-VC). To reduce the high computational complexity of intra prediction in HEVC, a fast intra prediction algorithm based on edge direction detection is proposed. The method divides the 35 intra prediction modes into 5 candidate sets according to 5 basic directions, each set containing 11 modes. The edge strength of a Prediction Unit(PU) along the 5 directions is calculated, and the candidate set corresponding to the direction with the largest proportion of the total strength is selected as the set of best candidate modes for the PU, which reduces computational complexity effectively. Experimental results show that, compared with HM8.0, the proposed method saves 15% and 18% of encoding time on average in the high efficiency and low complexity configurations.
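    A simplified sketch of the direction test is shown below with four directional kernels (a stand-in for the paper's five basic directions); the returned direction would then select the corresponding candidate mode set.

```python
import numpy as np
from scipy.ndimage import convolve

# directional gradient kernels: horizontal, vertical, 45 and 135 degrees
KERNELS = {
    "horizontal": np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]]),
    "vertical":   np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]),
    "diag45":     np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]]),
    "diag135":    np.array([[2, 1, 0], [1, 0, -1], [0, -1, -2]]),
}

def dominant_direction(pu):
    """Return the direction with the largest share of edge strength in the
    prediction unit, plus the per-direction proportions."""
    strength = {k: np.abs(convolve(pu.astype(float), kern)).sum()
                for k, kern in KERNELS.items()}
    total = sum(strength.values()) or 1.0
    return max(strength, key=strength.get), {k: v / total for k, v in strength.items()}
```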
  • XIAN Xiao-dong, JIANG Peng, TANG Yun-jian, YUAN Yu-peng
    Computer Engineering. 2014, 40(5): 274-278,284. https://doi.org/10.3969/j.issn.1000-3428.2014.05.057
    In view of the fact that domestic electronic bus-stop boards do not reveal the degree of crowding on buses, a new graphical bus-stop board is designed, together with a method for estimating crowdedness based on ultrasonic detection. An intelligent on-bus device combining the Global Positioning System(GPS) with General Packet Radio Service(GPRS) is designed: ultrasonic sensors detect whether anyone is standing beneath them, and the MCU counts the sensors that detect passengers to estimate the degree of crowding. A graphical electronic bus-stop board is developed, and the system implements functions such as displaying the crowdedness, vehicle tracking and bus arrival distance prediction. Extensive experiments show that the system is highly reliable, estimates crowdedness accurately, and visually displays the bus location and arrival distance on the stop board, meeting the requirements of Intelligent Transport Systems(ITS).
  • GUO Ze-hua, DUAN Zhe-min
    Computer Engineering. 2014, 40(5): 279-284. https://doi.org/10.3969/j.issn.1000-3428.2014.05.058
    Cutting the electricity cost of geographically distributed datacenters is a hot topic in cloud computing. Existing workload dispatching mechanisms each consider only one factor affecting the electricity cost and therefore cannot achieve a global minimum. This paper proposes a smart workload dispatching mechanism, Joint Electricity Price-aware, Cooling Efficiency-aware and Dynamic Frequency Scaling-aware Datacenter Load Balancing(JECF), to cut the electricity cost of distributed Internet datacenters. JECF jointly considers the time-varying electricity prices across datacenters, the efficiency of each cooling system, and the dynamic frequency of the active servers in each datacenter, reducing the total electricity cost by trading off the energy consumed by active servers against that consumed by cooling. Evaluation results show that JECF outperforms existing workload dispatching mechanisms and achieves a significant reduction in the electricity cost of distributed Internet datacenters.
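    A toy greedy dispatcher conveying the joint trade-off might look as follows; the field names and the cubic frequency-power cost model are illustrative assumptions, not the paper's formulation.

```python
def dispatch(total_load, dcs):
    """Send each unit of load to the datacenter with the cheapest marginal
    electricity cost, coupling price, cooling efficiency (PUE) and the
    frequency the active servers would need after taking the load."""
    def marginal_cost(dc):
        f = (dc["load"] + 1) / dc["capacity"]      # required normalized frequency
        server_power = dc["idle"] + dc["dyn"] * f ** 3
        return dc["price"] * server_power * dc["pue"]
    for _ in range(total_load):
        best = min(dcs, key=marginal_cost)
        best["load"] += 1
    return {dc["name"]: dc["load"] for dc in dcs}

# example: the cheap but inefficiently cooled site wins only until its
# rising frequency (and cubic power) erodes its price advantage
sites = [
    {"name": "A", "price": 0.06, "pue": 1.8, "idle": 100, "dyn": 200, "load": 0, "capacity": 500},
    {"name": "B", "price": 0.09, "pue": 1.2, "idle": 100, "dyn": 200, "load": 0, "capacity": 500},
]
print(dispatch(400, sites))
```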
  • LU Ya-nan, LU Heng-ya, PAN Hong-bing, LI Li, HE Shu-zhuan, SHA Jin, LI Wei
    Computer Engineering. 2014, 40(5): 285-288,294. https://doi.org/10.3969/j.issn.1000-3428.2014.05.059
    The Back Projection(BP) algorithm is a radar imaging algorithm based on time-domain processing. To overcome its low computing efficiency and slow processing speed, this paper proposes a parallel method after analyzing the algorithm's principle and operation process. It analyzes the parallelization feasibility of the BP algorithm and designs a dedicated BP operation module based on Field Programmable Gate Array(FPGA); pipelining within a module and parallel processing between modules are adopted to speed up computation. With this method, imaging a grid of 2 048×4 096 target points takes 139 s; the average time per point is 3 times faster than a GPU-based method, and the imaging quality matches the results of computer imaging. Experimental results show that the parallel method effectively improves the operation efficiency of the BP algorithm.
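    The per-pixel independence that makes BP parallelizable can be seen in this serial Python sketch, which omits phase compensation and interpolation; the argument conventions are simplifying assumptions.

```python
import numpy as np

def backprojection(echoes, pulse_pos, grid_x, grid_y, fs, c=3e8):
    """echoes: (pulses x samples) complex echo data; pulse_pos: antenna
    position per pulse. Every pixel integrates its range-matched echo sample
    over all pulses, independently of every other pixel."""
    image = np.zeros((len(grid_y), len(grid_x)), dtype=complex)
    for p, (px, py) in enumerate(pulse_pos):
        for iy, y in enumerate(grid_y):
            for ix, x in enumerate(grid_x):
                r = np.hypot(x - px, y - py)        # pixel-to-antenna range
                k = int(round(2 * r / c * fs))      # two-way delay -> sample index
                if k < echoes.shape[1]:
                    image[iy, ix] += echoes[p, k]
    return np.abs(image)
```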
  • WANG Liang, FU Fang-fa, LIU Zhao-chi, LAI Feng-chang
    Computer Engineering. 2014, 40(5): 289-294. https://doi.org/10.3969/j.issn.1000-3428.2014.05.060
    To decrease task migration overhead in a distributed multi-core system, a low-cost task migration scheme is implemented on a distributed multi-core system based on Network on Chip(NoC). The scheme relies on the system's message passing interface, under which programs are independent of the task mapping, so a task is remapped simply by updating the task mapping table. The task state in the μC/OS-II operating system, including the task stack and the task control block, is transferred to another node, where the migrated task resumes execution. The scheme does not need to transfer task code, and task state saving does not use checkpoints. Experimental results show that the scheme has little influence on task execution and responds immediately to migration requests; it is therefore low cost and can meet the real-time requirements of the system.
  • CAI Fang, SHEN Yi, NAN Kai
    Computer Engineering. 2014, 40(5): 295-298. https://doi.org/10.3969/j.issn.1000-3428.2014.05.061
    Duckling Document Library(DDL) is a tool for document collaboration and management among research teams, providing a cooperation platform for virtual teams in which a tag system is used to manage all documents. As the library is used, the number of documents without any tags gradually accumulates, and the quality of the tags users assign to some documents is poor; both problems impede effective management of the documents. To solve them, this paper proposes a tag recommendation method suited to the document library of a research online platform, combining collaborative filtering recommendation with keyword extraction recommendation, so that users are prompted to add qualified tags and the efficiency of the document library improves. Precision and recall metrics are used to evaluate the collaborative filtering recommendation, and a user survey to evaluate the keyword extraction recommendation. Experimental results show that a recommended list of three tags achieves the desired effect; in the production environment, the tag recommendation system is accurate, reliable and easy to implement.
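    A compact sketch of combining the two recommendation sources might look like this, with scikit-learn TF-IDF standing in for the keyword extraction and nearest tagged documents for the collaborative part; the neighbour count and ordering policy are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def recommend_tags(docs, tags, target_idx, k=3):
    """docs: list of document texts; tags: list of tag lists per document
    (possibly empty); returns k suggested tags for docs[target_idx]."""
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(docs)
    sims = cosine_similarity(X[target_idx], X).ravel()
    neighbours = sims.argsort()[::-1][1:6]          # 5 most similar documents
    from_cf = [t for i in neighbours for t in tags[i]]
    row = X[target_idx].toarray().ravel()
    terms = vec.get_feature_names_out()
    from_kw = [terms[i] for i in row.argsort()[::-1][:k]]
    seen, out = set(), []
    for t in from_cf + from_kw:                     # CF tags first, then keywords
        if t not in seen:
            seen.add(t)
            out.append(t)
    return out[:k]
```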
  • LIAO Hai-tao, SHI Zheng, ZHANG Teng
    Computer Engineering. 2014, 40(5): 299-303. https://doi.org/10.3969/j.issn.1000-3428.2014.05.062
    Global routing is a very important step in Very Large Scale Integration(VLSI) circuit design. The classic maze routing algorithm and its improved versions are widely used to handle global routing problems in industry, but as process nodes shrink, the high complexity of maze routing becomes increasingly evident. Using a new concept, boundary expansion, this paper presents a new point-to-point routing path search algorithm to counter the rapid growth in complexity as the routing scale expands. With the definition of free nodes, the new algorithm abandons inefficient node-by-node expansion: it expands the boundary to find new free nodes, and terminates only when a path is found or it is determined that no solution exists. Theoretical and experimental comparisons are conducted against classic routing algorithms; experimental results show that the proposed algorithm completes routing in 7%~14% of the classic algorithm's runtime in most cases.
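    For contrast, the node-by-node baseline that boundary expansion improves on is the classic Lee wavefront search, sketched below on a 0/1 obstacle grid.

```python
from collections import deque

def maze_route(grid, src, dst):
    """Lee-style breadth-first wavefront from src to dst on a grid where 0 is
    free and 1 is blocked; returns the shortest path or None."""
    h, w = len(grid), len(grid[0])
    prev = {src: None}
    q = deque([src])
    while q:
        x, y = q.popleft()
        if (x, y) == dst:                          # backtrack along predecessors
            path = [(x, y)]
            while prev[path[-1]] is not None:
                path.append(prev[path[-1]])
            return path[::-1]
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] == 0 and (nx, ny) not in prev:
                prev[(nx, ny)] = (x, y)            # expand one node at a time
                q.append((nx, ny))
    return None
```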
  • ZHA Qi-wen, ZHANG Wu, ZENG Xue-wen, SONG Yi
    Computer Engineering. 2014, 40(5): 304-308. https://doi.org/10.3969/j.issn.1000-3428.2014.05.063
    To address the performance and scalability problems of existing implementations of the Internet Small Computer System Interface(iSCSI) initiator, this paper studies the network processing software frameworks of multi-core network processors and proposes a heterogeneous operating system software framework for them. Based on this framework and the P-SPL data plane programming model, an implementation of the iSCSI initiator is proposed. Experimental results show that the implementation based on the heterogeneous framework achieves better throughput and response time than the Linux-based implementation: in a test environment with 6 GE ports, the new implementation gains up to 180 MB/s in read and write throughput and reduces response time by 1.6 ms.
  • YAN Li-ping, CHEN Qing-kui
    Computer Engineering. 2014, 40(5): 309-312. https://doi.org/10.3969/j.issn.1000-3428.2014.05.064
    A Printed Circuit Board(PCB) suffers weak immunity when an unreasonable layout degrades heat dissipation and unoptimized wiring degrades noise immunity. To attack this problem, an algorithm based on collaboration between layout and wiring is proposed. For the layout, a rule based on Cellular Automata(CA) moves components according to temperature limits so that the heat dissipation of the layout becomes more reasonable; for the wiring, a method based on the Ant Colony Algorithm(ACA) searches for the shortest routing length and the minimum number of via holes, and evaluates the noise immunity of the PCB. Experimental results show that the heat dissipation of the layout is improved by 14%, the average routing length and the number of via holes are decreased obviously, and the total average temperature of the PCB is decreased by 14%.
  • CUI Bo, LIU Zhong-jin, LI Yong, SU Li, JIN De-peng, ZENG Lie-guang
    Computer Engineering. 2014, 40(5): 313-316. https://doi.org/10.3969/j.issn.1000-3428.2014.05.065
    New protocols and architectures for the next generation Internet are an important part of current network research, and experimental verification on physical equipment is the main approach to examining the feasibility and performance of new technologies. Since verification methods based on software or on traditional network facilities have notable disadvantages, this paper proposes a device design to support network innovation experiments. Based on Field Programmable Gate Array(FPGA), the design decouples the data plane from the control plane and uses high performance network and storage modules, thereby achieving the goals required by network innovation studies, such as reprogrammability, high performance, and flexible control and management; the design is implemented on the TNIP network processing card. Experimental results show that TNIP can handle up to 16 Gb/s of network traffic and can be used to deploy network innovation experiments.
  • LI Bing-long, ZHANG Chuan-fu, HAN Zong-da, WANG Qing-xian
    Computer Engineering. 2014, 40(5): 317-320. https://doi.org/10.3969/j.issn.1000-3428.2014.05.066
    To acquire fragmented E-mail evidence from storage media, this paper analyzes the E-mail fragment file carving problem on the basis of set partition theory and establishes a carving approach. Accordingly, it designs an E-mail fragment file carving algorithm model comprising preprocessing, determination of the E-mail file fragment subset, and determination of the connection relations between E-mail fragments. Using a hexadecimal editor, it examines the internal structural features of E-mail files; combining the characteristics of fragment mail headers and tails and of embedded HTML files, it discusses the attributes of fragments on storage media and gives adjacency rules based on the concentration, follow, linearity and information characteristics of the fragments. Experimental results show that the algorithm acquires E-mail evidence effectively.
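    The flavor of such rules can be illustrated with a toy sketch; the feature tests below (header-line counting, boundary and byte-histogram checks) are hypothetical stand-ins for the paper's concentration, follow, linearity and information characteristics.

```python
import re

HEADER = re.compile(rb"^(From|To|Subject|Date|Received):", re.M)

def looks_like_email(block):
    """Crude membership test for the e-mail fragment subset: count
    RFC 822-style header lines in a raw byte block."""
    return len(HEADER.findall(block)) >= 2

def adjacency_score(frag_a, frag_b):
    """Toy adjacency rule: reward pairs whose boundary falls on a line break
    and whose boundary-region byte histograms are similar."""
    joined_ok = 1.0 if frag_a.endswith(b"\n") else 0.5
    hist = lambda b: [b.count(bytes([v])) / max(len(b), 1) for v in range(256)]
    ha, hb = hist(frag_a[-512:]), hist(frag_b[:512])
    sim = 1.0 - 0.5 * sum(abs(x - y) for x, y in zip(ha, hb))
    return joined_ok * sim
```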