
15 January 2014, Volume 40 Issue 1
    

  • WU Su-zhen, CHEN Xiao-lan, MAO Bo
    Computer Engineering. 2014, 40(1): 1-5. https://doi.org/10.3969/j.issn.1000-3428.2014.01.001

    For flash-based Solid State Disk(SSD), response times are linear in request size for requests of the same type, and the read and write performance of flash-based SSD is asymmetric. Based on these characteristics, this paper proposes a Size-based I/O Scheduler(SIOS) for flash-based SSD to improve the I/O performance of SSD-based storage systems from the viewpoint of average response time. SIOS exploits the asymmetric read and write performance of flash-based SSD and gives higher priority to read requests. Moreover, by processing the small requests in the I/O waiting queue first, the average waiting times of requests are reduced significantly. SIOS is implemented in the Linux kernel and evaluated with two kinds of SSD devices(SLC and MLC) driven by five traces. Compared with the existing Linux disk I/O schedulers, evaluation results show that SIOS reduces average response times by 18.4%, 25.8%, 14.9%, 14.5% and 13.1% for SLC-based flash SSD, and by 16.9%, 24.4%, 13.1%, 13.0% and 13.7% for MLC-based flash SSD. The results show that, compared with the state-of-the-art schedulers, SIOS reduces average response times significantly, and the I/O performance of SSD-based storage systems is thereby improved.
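    For readers who want to see the dispatch rule summarized above in concrete form, here is a minimal Python sketch: reads are served before writes, and within each class the smallest request goes first. The (read/write, size) sort key and the request fields are illustrative assumptions, not the authors' Linux kernel implementation.

```python
import heapq

class SizeBasedQueue:
    """Toy model of the dispatch order described above: reads before
    writes, smaller requests first within each class (illustration only)."""
    def __init__(self):
        self._heap = []
        self._seq = 0                       # tie-breaker for equal keys
    def add(self, is_write: bool, size: int, req_id: int):
        # key = (write flag, size): reads (False < True) first, small sizes first
        heapq.heappush(self._heap, ((is_write, size, self._seq), req_id))
        self._seq += 1
    def dispatch(self):
        return heapq.heappop(self._heap)[1] if self._heap else None

q = SizeBasedQueue()
q.add(True, 64, 1); q.add(False, 128, 2); q.add(False, 8, 3)
print([q.dispatch() for _ in range(3)])     # -> [3, 2, 1]
```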

  • TIAN Mei, LIU Xu-jie, ZHU Cui-tao
    Computer Engineering. 2014, 40(1): 6-10. https://doi.org/10.3969/j.issn.1000-3428.2014.01.002

    To overcome the shortcomings of existing single-node wideband compressed spectrum sensing, namely low efficiency and high load under low Signal Noise Ratio(SNR) and deep fading, a spectrum sensing algorithm based on a reweighted fast alternating direction method of multipliers is proposed. The algorithm simplifies the update of the auxiliary variable through derivation by exploiting the convexity of the objective function. For the update of the estimated variables, it makes the partially linearized augmented Lagrangian function strictly convex by linearizing the objective function and adding a quadratic term, and then solves the problem with an iterative soft-thresholding algorithm. Meanwhile, it adds weights to the target term so that elements of the signal with large weights are suppressed, driving the solution toward the minimum l0-norm solution. Experimental results show that the detection probability and detection speed of the algorithm are improved in low SNR environments.
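    The reweighted soft-thresholding idea mentioned in this abstract can be illustrated with a generic sketch: an ordinary iterative soft-thresholding loop whose per-coefficient weights are refreshed from the current estimate. This is a plain reweighted ISTA under assumed parameters, not the paper's fast-ADMM update.

```python
import numpy as np

def reweighted_ist(A, y, lam=0.1, eps=1e-3, n_outer=5, n_inner=100):
    """Generic reweighted iterative soft-thresholding (illustrative only)."""
    m, n = A.shape
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    w = np.ones(n)                           # initial weights
    for _ in range(n_outer):
        for _ in range(n_inner):
            g = A.T @ (A @ x - y)            # gradient of 0.5*||Ax - y||^2
            z = x - g / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)  # soft threshold
        w = 1.0 / (np.abs(x) + eps)          # small entries get large weights
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[[3, 30, 77]] = [1.5, -2.0, 0.8]
x_hat = reweighted_ist(A, A @ x_true)
print(np.round(x_hat[[3, 30, 77]], 2))       # estimates on the true support
```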

  • LI Wen-long, CHEN Yue, XU Jin-yong, LIANG Tao
    Computer Engineering. 2014, 40(1): 11-14,19. https://doi.org/10.3969/j.issn.1000-3428.2014.01.003

    Classifying radio devices by their turn-on transient characteristics is limited, because the turn-on transient signals must be captured and extracted. Aiming at this problem, this paper investigates Frequency Hopping(FH) transient characteristics, which can be used to classify Bluetooth devices in the same way as the Turn-on Transient(TOT) characteristic but without the limitation that RFF based on the TOT characteristic has. The FHT characteristic is validated with a Radio Frequency Fingerprinting(RFF) process that includes data acquisition, transient detection, radio frequency fingerprint extraction, and classification subsystems. The classification performance of the identification system is evaluated on experimental data, and it is demonstrated that the FH transient characteristic can be used to classify Bluetooth devices successfully. The paper further analyzes the implications of device fingerprinting for the security of Bluetooth networking protocols, illustrated by the detection and defeat of wormhole attacks, and on this basis gives a secure link management protocol based on Bluetooth fingerprinting.

  • ZHANG Sheng, SHI Rong-hua, ZHOU Fang-fang
    Computer Engineering. 2014, 40(1): 15-19. https://doi.org/10.3969/j.issn.1000-3428.2014.01.004

    Modern Internet security suffers from cognition difficulty, lack of global awareness and limited interaction. How to identify network attacks and abnormal events more quickly and effectively is a key and lasting topic, and visualization is a possible and valuable solution. Considering the features and defects of current visualization systems, this paper researches and constructs a new type of Intrusion Detection System(IDS) named IDS View, a system based on radial-panel visualization technology. With a main focus on user interface and experience, reduction of image occlusion, color mixing algorithms, curve algorithms and port mapping algorithms, the system can be applied to campus network security situation assessment. Application results show that analysts can intuitively perceive the network security status at both macro and micro levels, so the system can effectively identify network attacks and assist analysts in decision-making.

  • LU Wen-zhe, YANG Feng-lei, GAO Ning, MAO Wei
    Computer Engineering. 2014, 40(1): 20-24,30. https://doi.org/10.3969/j.issn.1000-3428.2014.01.005

    High reliability is an important characteristic of Next Generation Internet(NGI). The Internet is on the way to trusted network. The trusted network includes the trust of service providers, the trust of the network information transmission and the trust of end-users. The identity trust of service providers is the important base of the trusted network. Aiming at this problem, this paper describes a technical architecture of a website trusted service and explains the Domain Name Server(DNS)-based check protocol in details. By the check service, users can get the Internet service provider’s information conveniently from Internet applications. Experimental results show that this architecture and the related check protocol can meet the needs of the practical application in terms of efficiency, usability, scalability, etc. The check performance of one machine can reach 150 000 times per second.

  • WANG Jian, WU Yu, LIN Hong-fei, YANG Zhi-hao
    Computer Engineering. 2014, 40(1): 25-30. https://doi.org/10.3969/j.issn.1000-3428.2014.01.006

    Due to the simplistic and shallow application mode, syntactic information can not effectively play a role in the trigger recognition phase of traditional biological event extraction methods based on semantic and syntactic information. This paper describes a trigger extraction method based on the deep syntactic analysis. To make more effective utilization of the deep syntactic information, a unique indirect application mode is adopted. Deep syntactic information is used for edge detection, and the result is merged into the trigger extraction phase. Experimental results on BioNLP 2009 and 2011 shared tasks data achieve F-scores of 68.8% and 67.3%, which shows that the method has a good performance on biomedical event trigger extraction.

  • LIANG Dong, ZANG Dong-song, SUN Gong-xing, Valentin Kuznetsov
    Computer Engineering. 2014, 40(1): 31-38. https://doi.org/10.3969/j.issn.1000-3428.2014.01.007
    Under the complex data environment of the Compact Muon Solenoid(CMS) experiment on the Large Hadron Collider(LHC), a number of relational data sources provide organization and distribution information for indexing the complex CMS data. To provide an accurate keyword query function for the data query system, this paper presents a keyword query system that can support different databases. By analyzing the database schema graph, the system dynamically translates a keyword Query Language(QL) query into Structured Query Language(SQL). The key issue in this translation is resolving ambiguity, so two algorithms are provided: a schema graph analysis algorithm based on query entities and a dynamic join algorithm based on minimal weight tree generation. Experimental results show that the dynamic join algorithm can calculate the connection mode of the database tables for keyword queries, give the keyword query system high query efficiency, and meet users' needs for real-time, accurate queries.
  • CHENG Xiao-lin, XIONG Yan, LIU Qing-wen, LU Qi-wei
    Computer Engineering. 2014, 40(1): 39-44. https://doi.org/10.3969/j.issn.1000-3428.2014.01.008
    Aiming at the problems of data sparsity and dataset heterogeneity in memory-based collaborative filtering recommendation systems, this paper proposes a collaborative filtering method based on variable-weight similarity computation and an Adaptive Local Fusion Parameter(ALFP). The method extracts user emotion information from the user-item ratings by counting over the dataset to compute user similarity, and improves the item similarity computation according to the quality of the user-item ratings. It then derives the ALFP from the prediction confidence of the user-based method and the item-based method to enhance the adaptability of collaborative filtering to the dataset. Experimental results show that the method outperforms the traditional Global Fusion Parameter(GFP) method by 0.02 in Mean Absolute Error(MAE) under data sparsity, has higher recommendation precision and coverage, and effectively alleviates the problems of data sparsity and heterogeneous datasets.
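    The fusion step can be pictured with a small hypothetical sketch: each rating prediction blends the user-based and the item-based estimate with a per-prediction fusion parameter derived from the confidence of each method. The confidence heuristic and weighting below are assumptions for illustration, not the paper's ALFP definition.

```python
def fuse_prediction(p_user, p_item, conf_user, conf_item):
    """Weight each prediction by the relative confidence of its method."""
    total = conf_user + conf_item
    lam = conf_user / total if total > 0 else 0.5   # local fusion parameter
    return lam * p_user + (1.0 - lam) * p_item

# user-based predicts 4.2 with confidence 0.8 (many similar users),
# item-based predicts 3.6 with confidence 0.4 (few co-rated items)
print(round(fuse_prediction(4.2, 3.6, 0.8, 0.4), 2))   # -> 4.0
```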
  • SUN Tao, SUN Hong-feng, CHEN Wei-heng, LIANG Sai-ting
    Computer Engineering. 2014, 40(1): 45-48. https://doi.org/10.3969/j.issn.1000-3428.2014.01.009
    Skyline queries are used in many applications in fields as diverse as multi-objective decision making and data mining. Previous studies mainly focus on static datasets, and the few exceptions all target uncertain datasets with discrete values. This paper introduces a new form of multi-dimensional dataset whose attributes are also uncertain but follow normal distributions, and proposes an algorithm that processes Skyline queries on such datasets with the help of indexing and divide and conquer. Experimental results show this approach can efficiently perform Skyline queries on this type of data.
  • ZHANG Yi, LV Xiu-qin
    Computer Engineering. 2014, 40(1): 49-54. https://doi.org/10.3969/j.issn.1000-3428.2014.01.010
    For fast rendering of large scale point cloud, balanced octree storage structure is proposed based on part-memory access mechanism and node points limit as leaf nodes forming condition. The rendering process in-core and out-of-core is designed, including node visible judgment, data scheduling and point cloud drawing. In order to improve the efficiency of visibility judgment, node visualization radius is proposed on the basis of distance and angle constraints between viewpoint and node. Experiments are done with measured large-scale point cloud data. It concludes that the technical approach in this paper is able to smoothly render one hundred million point cloud from global to local with a smaller memory consumption in limited memory resources.
  • CHEN Tian-hao, SHUAI Jian-mei, ZHU Ming
    Computer Engineering. 2014, 40(1): 55-58,62. https://doi.org/10.3969/j.issn.1000-3428.2014.01.011
    Users looking for a favorite video among vast amounts of network resources often need frequent operations, and a personalized recommendation service can be an effective solution to this problem. To address the low accuracy of current recommendation, this paper presents an improved recommendation method based on collaborative filtering. It determines a subset of movies to recommend according to the past records of similar users, namely the neighbor set. It then mines the preferences of the current user, builds the current user's interest model, and matches it against the candidate movies; recommendation follows the level of matching degree. Afterwards, it classifies the recommended film sets to adapt to multi-user viewing in families. Additionally, it recommends similar films during the early running of the system to alleviate the cold-start problem to a certain degree. Experimental results show that the improved recommendation method has distinctly higher recommendation accuracy than the existing collaborative filtering algorithm.
  • WU Kai-jun, LU Huai-wei
    Computer Engineering. 2014, 40(1): 59-62. https://doi.org/10.3969/j.issn.1000-3428.2014.01.012
    Aiming at the problem of task scheduling in cloud computing, this paper combines the population-level cooperation and information sharing characteristics of Particle Swarm Optimization(PSO) and proposes a task scheduling algorithm based on Discrete Particle Swarm Optimization(DPSO). In the algorithm, the initial population is generated randomly and the inertia weight is adjusted in a time-varying manner. During position updating, positions are legalized by mapping them with a rounded-remainder-of-absolute-value method, which realizes the discretization of PSO. The cloud computing simulation platform CloudSim is built and recompiled, and experimental results after 200 iterations show that DPSO, PSO and the GA algorithm converge to 457.69 s, 467.90 s and 472.41 s respectively, which proves that the DPSO algorithm can effectively solve the task scheduling problem in cloud environments and converges faster than the PSO and GA algorithms.
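    The position-legalization step can be illustrated with a short sketch; the rounded-remainder-of-absolute-value mapping below is one reading of that step and may differ in detail from the paper.

```python
import random

def legalize(position, n_vms):
    """Map a continuous particle coordinate to a valid VM index by rounding
    its absolute value and taking the remainder (assumed interpretation)."""
    return int(round(abs(position))) % n_vms

def decode_particle(positions, n_vms):
    """Each dimension of the particle is one task; the legalized value is
    the index of the VM the task is assigned to."""
    return [legalize(p, n_vms) for p in positions]

random.seed(1)
particle = [random.uniform(-10, 10) for _ in range(6)]   # 6 tasks
print(decode_particle(particle, n_vms=3))                # tasks mapped to 3 VMs
```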
  • WANG You-zhao, HUANG Dong
    Computer Engineering. 2014, 40(1): 63-67,71. https://doi.org/10.3969/j.issn.1000-3428.2014.01.013
    Aiming at the string matching characteristics of on-line microcomputer anti-misoperation systems and the insufficiency of the BM algorithm, this paper analyzes and puts forward an improved BM algorithm, the WBM algorithm, to further shorten the matching time. It removes the good-suffix rule, makes an appropriate improvement to the bad-character rule, and constructs a networked data environment as the system maintenance framework. The WBM algorithm is applied to this framework in the software of the microcomputer anti-misoperation system. Experimental results prove that WBM is the fastest among the four compared algorithms, BM, WBM, BMH and QS, and in the same hardware test its CPU usage is 0.76%. Applying the algorithm shortens the search time to 3.9 s and improves search accuracy to 99.5%, which clearly improves the search efficiency; integrated with the network framework, it can further reduce the maintenance time of the microcomputer anti-misoperation system.
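    As a reference point for dropping the good-suffix rule, here is a bad-character-only matcher in the Boyer-Moore-Horspool style; this is a generic sketch, not the paper's WBM construction.

```python
def bad_character_search(text: str, pattern: str):
    """Horspool-style matching that keeps only a bad-character shift table."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    # shift of a character = distance from its last occurrence to the pattern end
    shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    hits, pos = [], 0
    while pos <= n - m:
        if text[pos:pos + m] == pattern:
            hits.append(pos)
        pos += shift.get(text[pos + m - 1], m)   # jump by the bad-character shift
    return hits

print(bad_character_search("misoperation interlock operation", "operation"))  # -> [3, 23]
```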
  • GUO Ming-kun, CHAI Zhi-lei
    Computer Engineering. 2014, 40(1): 68-71. https://doi.org/10.3969/j.issn.1000-3428.2014.01.014
    The Java language and Java processors are receiving attention in real-time embedded systems. The traditional method call mechanism of the Java Virtual Machine(JVM), which uses dynamic loading and post-analysis, makes the Worst-case Execution Time(WCET) difficult to predict. A scheme named advance analysis-micro program execution is put forward to solve this problem. Advance analysis turns the symbolic references of Java into direct calls, and the micro program, running on the hardware processor, limits the execution time to a foreseeable number of clock cycles. Linear statistics of the running time show that the improved mechanism makes the WCET predictable.
  • DAI Fei, LI Tong, XIE Zhong-wen, QIN Jiang-long, LIU Jin-zhuo, QIAN Ye
    Computer Engineering. 2014, 40(1): 72-77,82. https://doi.org/10.3969/j.issn.1000-3428.2014.01.015
    In order to improve the quality and efficiency of software evolution and shorten its duration, it is necessary to study the property soundness of the software processes toward which the software is evolving, so as to ensure the correctness of software evolution processes. According to the process-level definition of the software Evolution Process Meta-model(EPMM), property soundness is defined to ensure that software evolution processes meet the required dynamic properties during process enactment. Moreover, the corresponding property soundness checking algorithms are designed based on the reachability graph of Petri nets. Application results show that checking property soundness ensures that no exceptions occur in the software process logic and that the process definition meets the correctness requirements.
  • CAO Xiao, LI Ying
    Computer Engineering. 2014, 40(1): 78-82. https://doi.org/10.3969/j.issn.1000-3428.2014.01.016
    The low efficiency of on-card applet execution is a bottleneck restricting the development of the Java Card. This paper therefore studies the operating principle of the Java Card Virtual Machine(JCVM) and proposes a feedback-based JCVM instruction pre-scheduling scheme to optimize the execution architecture of the Java Card. It introduces the concept of a Weighted Control Flow Graph(WCFG) built by collecting run-time instruction flow statistics from feedback applications, and then proposes a code arrangement technique based on the WCFG to realize pre-scheduling of the interpreter. In the target system architecture, the hot instruction handler functions are reordered according to the statistical information of the feedback applications. Experimental results show that after optimization the efficiency of the interpreter increases by 15.29% without relying on additional system resources, so the scheme is helpful for optimizing resource-constrained embedded devices based on an interpreter architecture.
  • XU Li, SHI Shao-bo
    Computer Engineering. 2014, 40(1): 83-87,97. https://doi.org/10.3969/j.issn.1000-3428.2014.01.017
    Aiming at the synchronous data flow feature of Software-defined Radio(SDR) applications, this paper proposes a task scheduling and allocation algorithm for asymmetric multi-core SDR. The algorithm comprehensively considers the communication time between tasks and the fixed task pipeline, and ensures the versatility and parallelism of task scheduling and allocation. It models task scheduling and allocation with an Integer Linear Programming(ILP) method, and further improves the execution efficiency of task scheduling and allocation by using a task splitting method to optimize the scheduling and allocation results. Experiments on a targeted SDR platform for IEEE 802.11a frequency offset estimation show that the proposed algorithm can improve the SDR throughput by 5.97% and the processor core utilization by 3.03%, and reduce the longest idle waiting time of a processor core by 34.31%.
  • WANG Long, ZHANG Liang
    Computer Engineering. 2014, 40(1): 88-92,102. https://doi.org/10.3969/j.issn.1000-3428.2014.01.018
    Abundant services exist on the Internet, along with numerous automatic composition methods. Composing these services with such automatic methods in the light of SOA certainly serves next-generation software development. However, almost all such methods (including the elegant methods rooted in the so-called Roman model) rely heavily on the provision of service behaviors, which is definitely deficient in practice as well as in standards(e.g. WSDL from W3C). To fill this gap, this paper develops a mechanism and an interaction protocol to bridge automatic composition methods with services in the real world. It constructs a collaborative graphical tool to enrich Web Services Description Language(WSDL) files with service behavior annotations, and a client generator based on Axis2 to make the generated clients behavior-aware. The approach is tested with a k-lookahead algorithm against some popular service repositories including Seekda, and the results are very promising.
  • LIU Wen-bin, LI Xiang-bao, FU Sha, LIU Hong-bing, WEN Zhi-qiang
    Computer Engineering. 2014, 40(1): 93-97. https://doi.org/10.3969/j.issn.1000-3428.2014.01.019
    An improved approximation data aggregation scheduling algorithm for Minimum Data Aggregation Latency(MDAL) is presented, because existing algorithms have a high latency bound. The algorithm constructs a Breadth First Search(BFS) tree rooted at the center node, and then finds a Maximal Independent Set(MIS) layer by layer such that adjacent dominators are only 2 hops away from each other. A data aggregation scheduling tree rooted at the center node is formed by using connector nodes to link the nodes in the MIS, and the nodes' data can then be scheduled layer by layer according to this tree. Every dominator connects its 2-hop neighboring dominators using a minimal number of connectors, and the common neighbors of two adjacent dominators select the dominator closer to the center node to join the data aggregation scheduling tree and send their data to it. Using this method, the latency for the sink to collect all sensors' data is greatly reduced. Simulation results show that, compared with the SAS algorithm, Guo's algorithm and the IAS algorithm, this algorithm has lower average latency than previous works and has a latency bound of 14R+Δ-10.
  • ZHANG Qi, JIN Yin-cheng, LI Miao, ZHANG Jian-xiong
    Computer Engineering. 2014, 40(1): 98-102. https://doi.org/10.3969/j.issn.1000-3428.2014.01.020
    The Trie tree data structure is flexible to implement and requires little storage space, and it is the preferred way to realize high speed routing lookup and packet forwarding. To meet the design requirements of a 10 Gb/s line-speed micro engine in a Network Processor(NP), an optimally balanced, multilayer-storage routing lookup algorithm based on the Trie tree is proposed. It establishes a balanced compressed tree structure in which adjacent nodes are compressed into a single storage node. Constructing this specific tree structure reduces the search depth of the tree, trading space for time and improving the efficiency of lookup and packet forwarding. The Trie-based routing lookup algorithm is implemented in an NP design and its performance is analyzed. A single micro engine reaches a lookup speed of up to 4.4 Mb/s, with the advantages of small storage and high update speed.
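    As a baseline for the lookup structure discussed above, a plain (uncompressed) binary trie with longest-prefix matching is sketched below; the paper's node compression and multilayer storage are not reproduced here.

```python
class TrieNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = [None, None]    # one child per bit
        self.next_hop = None

class BinaryTrie:
    """Uncompressed binary trie for longest-prefix matching (baseline sketch)."""
    def __init__(self):
        self.root = TrieNode()
    def insert(self, prefix_bits: str, next_hop: str):
        node = self.root
        for b in prefix_bits:
            i = int(b)
            if node.children[i] is None:
                node.children[i] = TrieNode()
            node = node.children[i]
        node.next_hop = next_hop
    def lookup(self, addr_bits: str):
        node, best = self.root, None
        for b in addr_bits:
            node = node.children[int(b)]
            if node is None:
                break
            if node.next_hop is not None:
                best = node.next_hop     # remember the longest match so far
        return best

t = BinaryTrie()
t.insert("10", "port A")                 # short prefix
t.insert("1011", "port B")               # longer, more specific prefix
print(t.lookup("10110000"))              # -> port B
```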
  • GU Xia-jun, ZHANG Jing, QIAN Dong-jun
    Computer Engineering. 2014, 40(1): 103-106,112. https://doi.org/10.3969/j.issn.1000-3428.2014.01.021
    According to the characteristics of the random time-varying frequency offset parameters of a multiple-antenna channel, this paper establishes a parametric observation model. Following the calculation principle of the time-varying estimation Cramer-Rao Lower Bound(CRLB), it derives the CRLB of the frequency offset characteristic parameters under the nonlinear observation model and sequential observations. When the values of the characteristic parameters are unknown, the observation equation is linearized at the estimated values using a Taylor series expansion, which yields an approximate CRLB of the frequency offsets. Simulation results show that the method achieves good asymptotic convergence of the time-varying parameters' CRLB whether or not the true values of the parameters are known, and that the two estimation lower bounds reach a steady state as the signal-to-noise ratio and the number of observations increase.
  • LIANG Wei-bo, PENG Jian-hua
    Computer Engineering. 2014, 40(1): 107-112. https://doi.org/10.3969/j.issn.1000-3428.2014.01.022
    The time delay parameter cannot be accurately estimated in a Long-term Evolution(LTE) system because of multipath fading and non-line-of-sight conditions in urban canyons and indoor environments. In response to these issues, a time delay estimation algorithm based on Multiple-input Multiple-output(MIMO) in the LTE system is proposed. It uses the primary synchronization signal, which has good autocorrelation, as the reference signal. Then, in order to reduce the error rate in the signal transmission process, transmit diversity and Maximum Ratio Combining(MRC) are utilized to obtain the received signal with the maximum SNR. The time delay estimate is obtained by cross-correlating the received signal with the reference signal. Simulation results show that, at the 90% point of the cumulative error distribution, the time delay estimation of the proposed algorithm is improved by about 5 sampling intervals compared with the reference algorithm.
  • LI Shuai, LI Yong, SU Li, JIN De-peng, ZENG Lie-guang
    Computer Engineering. 2014, 40(1): 113-116,138. https://doi.org/10.3969/j.issn.1000-3428.2014.01.023
    The Tsinghua University Network Innovation Environment(TUNIE) platform provides strong support for testing and validating innovative network architectures. During operation of the experiment platform, time-consuming experiment deployment causes serious inefficiency. Through analysis and measurement, this paper identifies two reasons: the centralized image pool and the serial experiment deployment process. Aiming at these problems, it proposes a parallel and fast experiment deployment scheme, which uses a distributed image pool to reduce the image copy time of non-local nodes and a parallel deployment process to improve resource utilization in the time dimension. Simulation and actual application results show that the parallel fast deployment scheme can effectively speed up deployment, reducing the deployment latency by 40% to 89% compared with the serial deployment scheme.
  • REN Jian-hua, LI Yuan-cheng, YANG Hong
    Computer Engineering. 2014, 40(1): 117-120,143. https://doi.org/10.3969/j.issn.1000-3428.2014.01.024
    Aiming at the blindness of the route discovery period in the AODVjr algorithm, which leads to low utilization of network energy, W-AODVjr, an algorithm based on route width, is proposed. A cyclic route area is formed between the source node and the destination node, and an index is used to find the optimal route width within it. To balance energy consumption, the source node selects, among the candidate paths, the route whose minimum residual node energy is maximal, so as to protect the nodes with the least remaining energy. NS2 simulation results reveal that, compared with AODVjr, the W-AODVjr algorithm can effectively guarantee the packet transmission success rate, increase the utilization of network energy by 8% and extend the network lifetime by approximately 12%, which proves that W-AODVjr is superior to the original algorithm in Zigbee networks.
  • ZHAO Liang, CHEN Shi-ping, LI Zhao-wei
    Computer Engineering. 2014, 40(1): 121-125,148. https://doi.org/10.3969/j.issn.1000-3428.2014.01.025
    In order to deliver multiple types of traffic with different requirements in highly resource constrained Wireless Sensor Networks(WSNs), this paper proposes QA-MAC, a QoS-aware and priority based MAC protocol for Wireless Multimedia Sensor Networks(WMSNs). QA-MAC aims to increase the utilization of the channel with effective service differentiation mechanisms, adaptive contention window and dynamic duty cycle according to different priority flows. It is able to ensure the demand of real-time traffic transmission. Simulations show significant improvements in terms of latency and data delivery. Compared with the S-MAC protocol, as the network load increases, the average packet transmission rate can be increased by more than 30% and it can be applied in many fields.
  • WU Da-peng, GONG Chang-he, WANG Ru-yan, WANG Jian
    Computer Engineering. 2014, 40(1): 126-129,152. https://doi.org/10.3969/j.issn.1000-3428.2014.01.026
    A Long Term Evolution(LTE) system carries voice traffic in the packet-switched domain, where the delay- and packet-loss-sensitive quality of voice traffic is difficult to guarantee. Therefore, this paper presents a Queueing Delay Aware(QDA) voice traffic scheduling mechanism. It determines users' scheduling priority according to the queue length, the channel conditions, the queuing delay and the maximum permissible delay of the traffic, and allocates user resources accordingly. Theoretical analysis and simulation results show that, compared with the VoIP Scheduling Model(VSM) scheduling mechanism, the proposed mechanism can use network resources effectively, reduce the packet loss rate and improve user fairness while meeting the delay and system throughput requirements.
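    The priority computation named in the abstract can be pictured with a toy function over the four quantities it lists; the weights and the functional form below are assumptions, not the QDA formula from the paper.

```python
def scheduling_priority(queue_len, max_queue, channel_quality,
                        queuing_delay, max_delay):
    """Illustrative priority: urgency from queuing delay, plus channel
    quality and backlog terms (assumed weights)."""
    delay_urgency = min(queuing_delay / max_delay, 1.0)   # near deadline -> urgent
    backlog = min(queue_len / max_queue, 1.0)
    return 0.5 * delay_urgency + 0.3 * channel_quality + 0.2 * backlog

# a user close to its delay budget outranks one with a better channel
print(scheduling_priority(20, 100, 0.9, 10, 100) <
      scheduling_priority(20, 100, 0.5, 80, 100))         # -> True
```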
  • XU Ming-di, YANG Lian-jia
    Computer Engineering. 2014, 40(1): 130-133. https://doi.org/10.3969/j.issn.1000-3428.2014.01.027
    Trusted Computing Technology(TCT) is an effective way to secure Embedded Real-time Operation Systems(ERTOS). However, existing TCT can hardly satisfy the real-time and low power consumption requirements directly. Based on the VxWorks kernel, this paper puts forward a trusted computing solution that designs an embedded real-time trusted computing module and a trusted software stack, realizes the chain of trust by using integrity measurement certificates, and establishes a lightweight access control architecture. Experimental results show that the average execution time of commands on the trusted platform module is 65.81% lower than that of the SW-TPM module, and the lightweight access control adds only a little execution overhead to the kernel, so the solution can meet the ERTOS requirements of real-time performance and low power consumption as a whole.
  • LIU Rong-xiang, LAI Hong, ZHANG Wei
    Computer Engineering. 2014, 40(1): 134-138. https://doi.org/10.3969/j.issn.1000-3428.2014.01.028
    Based on privileged arrays in Shamir secret sharing schemes, a novel ideal secret sharing scheme is proposed. By studying the new concepts of admissible tracks, non-admissible tracks and privileged arrays in Shamir secret sharing schemes, this paper analyzes non-threshold Shamir schemes, and these concepts are extended to the Brickell secret sharing scheme based on vector space. The new scheme solves two problems: the difficulty of constructing the function in the Brickell scheme, and the algorithm for finding privileged arrays of any length when such arrays exist. Built on the Brickell scheme, the new scheme is linear and has low computational cost. Meanwhile, the participants can verify their shares with each other, which provides the cheat-proof property of the scheme.
  • XU Shou-kun, WANG Wei, YUE Guang-xue
    Computer Engineering. 2014, 40(1): 139-143. https://doi.org/10.3969/j.issn.1000-3428.2014.01.029
    To improve the accuracy of the Kohonen neural network model for clustering network intrusions, this paper combines the Invasive Weed Optimization(IWO) algorithm with the Kohonen neural network and proposes the IWO-Kohonen clustering algorithm. It uses the IWO algorithm to optimize the initial weights of the Kohonen neural network, and then trains the Kohonen neural network model to obtain the optimal values. The IWO algorithm enhances the search ability of the clustering algorithm, which not only improves the clustering accuracy but also accelerates convergence. Experimental results show that the proposed algorithm has a higher accuracy rate compared with the fuzzy clustering algorithm and the generalized neural network clustering algorithm, and a higher detection rate and lower false alarm rate compared with the ant clustering algorithm and the C-means clustering algorithm.
  • TANG Ya-ling
    Computer Engineering. 2014, 40(1): 144-148. https://doi.org/10.3969/j.issn.1000-3428.2014.01.030
    To address the disadvantages of existing transmission mechanisms, this paper proposes a secure transmission mechanism based on certification. An RFID protocol is presented to ensure the correctness of the label in the perception stage, and secure transmission between the Local Information Server of Things(LIST) and the Remote Information Server of Things(RIST) is accomplished by Hash calculation and nested encryption operations. Results of temperature and humidity sensor experiments and wireless communication experiments based on the Internet of Things(IoT) show that the method is effective and can ensure the security of information transmission, and that its computational and communication overheads are lower than those of the existing MLDL methods.
  • ZHOU Hong-zhi, WANG Dai-mu
    Computer Engineering. 2014, 40(1): 149-152. https://doi.org/10.3969/j.issn.1000-3428.2014.01.031
    Video steganalysis techniques can detect secret information hidden in video and provide security for national, governmental and corporate secrets. Videos contain not only the spatial information within each image but also the temporal information between images. This paper proposes a video steganalysis method with refined identification of temporal and spatial characteristics. The method defines the features of video in the temporal and spatial dimensions in detail: Markov modeling of the intra-block and inter-block processes of the image is used to extract spatial features, and difference analysis of the temporal changes between images is used to extract temporal characteristics. The temporal and spatial characteristics are input into a Support Vector Machine(SVM) model for training and testing. A detection precision of 97.13% on 3 100 test videos shows that the proposed method can effectively distinguish stego videos from non-stego videos.
  • LI Ming-ze, XIANG Yang, ZHANG Wen-hua, LIANG Li
    Computer Engineering. 2014, 40(1): 153-157,166. https://doi.org/10.3969/j.issn.1000-3428.2014.01.032
    With the development of steganalysis, new feature extraction algorithms keep emerging, but up to now there is still no set of universal features that is effective for JPEG image steganalysis. To improve the accuracy of blind detection, this paper proposes a universal steganalysis algorithm based on multi-domain features. Markov chains and histogram statistics functions are used to capture the correlations of neighboring coefficients or pixels in the DCT domain, spatial domain and wavelet domain as original features, and the calibrated features are extracted from the calibrated image in the same way. Combining boosting feature selection with exhaustive search, the features improve the detection accuracy and reduce the classification time after dimensionality reduction. Experiments on four kinds of typical JPEG steganography schemes at small embedding rates show that, compared with existing features, the proposed diverse features achieve more than 2% higher accuracy and wider adaptability.
  • WANG Hui, WEI Shi-min
    Computer Engineering. 2014, 40(1): 158-160. https://doi.org/10.3969/j.issn.1000-3428.2014.01.033
    By means of the trace function from the field GF(2^n) to the subfield GF(2^m), the cryptographic properties of a class of nonlinear spreading sequences are considered, namely the trinomial property of the cascaded No sequences, and the trinomial form of the cascaded No sequences is given. From the derived conclusions, binary No sequences have the same trinomial property, so the trinomial property problem of binary No sequences is solved. Analysis results show that both the cascaded No sequences and the binary No sequences have regular trinomial pairs.
  • GU Jia-yun, LIU Jin-fei, CHEN Ming
    Computer Engineering. 2014, 40(1): 161-166. https://doi.org/10.3969/j.issn.1000-3428.2014.01.034
    A modified prediction method of large size data based on Support Vector Machine(SVM) classification and regression is proposed aiming at the problem that prediction accuracy of SVM regression is not proportional to the size of training sample. The method combines the SVM classification and regression algorithms. The size of the sample data is optimized, and the sample data is classified based on a priori knowledge. According to the classification, the classification model is trained. Then it trains the regression model for training sample of all classes, and makes the prediction with large size data based on SVM classification and regression. With the case of Shanghai Composite Index, the Mean Squared Error(MSE) of values predicted by the new method based on SVM classification and regression is 12.4, lower than 47.8 predicted by Artificial Neural Network(ANN) and much lower than 436.9 predicted by SVM regression. These results verify the effectiveness and feasibility of the method.
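    A minimal sketch of the classify-then-regress idea follows, using scikit-learn models in place of the paper's implementation; the toy data and the a priori grouping rule are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.where(X[:, 0] < 0, 2.0 * X[:, 0], -1.5 * X[:, 0]) + rng.normal(0, 0.1, 300)
labels = (X[:, 0] >= 0).astype(int)          # a priori grouping of the samples

clf = SVC().fit(X, labels)                   # step 1: train the classifier
regs = {c: SVR().fit(X[labels == c], y[labels == c]) for c in (0, 1)}  # step 2

def predict(x):
    c = clf.predict(x.reshape(1, -1))[0]     # route the sample to its regressor
    return regs[c].predict(x.reshape(1, -1))[0]

print(round(predict(np.array([-1.0])), 2), round(predict(np.array([2.0])), 2))
```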
  • XU Sheng-qiang, XIA Yi, YAO Zhi-ming, YANG Xian-jun, ZHANG Tao, SUN Yi-ning
    Computer Engineering. 2014, 40(1): 167-171,176. https://doi.org/10.3969/j.issn.1000-3428.2014.01.035
    Aiming at the demand of people counting in the entrance of the public places with complicated environment, a people counting system based on the flexible pressure-sensitive sensor is presented in this study. The statistical accuracy of the proposed system is not influenced by illumination change, shelter, motion blur and complex background. The sensor is laid down at the entrance of public places. The plantar pressure data is sampled and transfers to the remote computer for further analysis through the network communication module in real time. By data denoising, image segmentation and feature extraction, the footprints of the pedestrians and their characteristic parameters are obtained. Then the algorithm of feature matching and motion trajectory planning is applied to realize pedestrian counting. Experimental results show that the proposed system not only has the merits of working stably, strong robustness and fast response, but also can provide a high statistical accuracy in all kinds of test conditions, and in some specific case, an accuracy of 98% can be obtained.
  • CHEN Jiu-mei
    Computer Engineering. 2014, 40(1): 172-176. https://doi.org/10.3969/j.issn.1000-3428.2014.01.036
    Aiming at Two-echelon Location-routing Problem(2E-LRP) of city logistics distribution system, Artificial Bee Colony(ABC) algorithm is applied to solve it. The selection strategies of this algorithm are extended. Tournament selection strategy with parameter control is put forward based on two common selection strategies, which includes the selection strategy based on fitness and tournament selection strategy. Through simulation on large, medium, small scale examples, results show that ABC algorithm can effectively solve 2E-LRP within reasonable computation time, and ABC algorithm using selection strategy based on fitness has faster speed, using tournament selection strategy has higher quality of the best solution, using tournament selection strategy with a parameter control has better quality of the poor solution and can get the stable solutions.
  • LIU Lu, GAO Qiang, LIU Yan-heng, SUN Xin
    Computer Engineering. 2014, 40(1): 177-180. https://doi.org/10.3969/j.issn.1000-3428.2014.01.037
    Instance selection is an effective method to remove noisy and redundant data. To address the imbalance between generalization ability and reduction rate in existing instance selection methods, this paper proposes a new instance selection method, the Redundant Instance Pair Elimination(RIPE) algorithm. It gives the concept of the nearest similar pair, calculates the nearest similar pairs of the dataset, and removes the eligible instances. Simulation results on 11 different datasets show that the classification accuracy and storage compression ratio of the processed datasets are obviously improved compared with the original datasets. Compared with the Edited Nearest Neighbor(ENN) algorithm, the proposed algorithm maintains the classification accuracy, improves the average storage compression ratio by more than 35%, keeps the data distribution of the original datasets intact, and achieves a better compromise between classification accuracy and storage compression ratio.
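    One possible reading of the nearest-similar-pair elimination is sketched below: for each instance, if its nearest neighbour shares its label, one member of the pair is treated as redundant and dropped. The exact RIPE rule may differ.

```python
import numpy as np

def reduce_instances(X, y):
    """Drop one member of each nearest pair with identical labels (sketch)."""
    X, y = np.asarray(X, float), np.asarray(y)
    keep = np.ones(len(X), dtype=bool)
    for i in range(len(X)):
        if not keep[i]:
            continue
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf
        j = int(np.argmin(d))                # nearest neighbour of instance i
        if keep[j] and y[i] == y[j]:
            keep[j] = False                  # redundant pair: drop one member
    return X[keep], y[keep]

X = [[0, 0], [0.1, 0], [0, 4], [5, 5], [5, 5.1], [9, 9]]
y = [0, 0, 0, 1, 1, 1]
Xr, yr = reduce_instances(X, y)
print(len(X), "->", len(Xr))                 # fewer instances, both classes kept
```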
  • LIU Jun, ZHOU Ming-quan, GENG Guo-hua, LIJI Jun-nan
    Computer Engineering. 2014, 40(1): 181-185,190. https://doi.org/10.3969/j.issn.1000-3428.2014.01.038
    To meet the virtual restoration requirement of large amounts of fragments in the third excavation of Terra-cotta Warriors and Horses, properties of the fragments are analyzed, and the classification principles of earthen tub fragments are induced. In view of the fragment classification and different thicknesses of the fracture surfaces, a fragments splicing method is proposed, which combines the spatial contour curves matching and the spatial fracture surfaces matching. All of the fragments are classified according to the thickness of sections, and different matching methods are employed to adapt to different types of fragments. The multi-scale integral invariants are used as the feature representation methods for the contour curves and fracture surfaces, and the initial matching point pairs are obtained by the consistency constraint method. The filtering algorithm is used to remove the pseudo matching points, and the geometric hashing algorithm is used to find the optimal matching to get the local and global matching. This method is applied to the restoration of Terra-cotta Warriors and Horses, and the results show that it can get correct matching relationship between fragments and has good robustness.
  • JIANG Ming-zuo, ZHANG Xin-li, WU Tao, WANG Jia-xia
    Computer Engineering. 2014, 40(1): 186-190. https://doi.org/10.3969/j.issn.1000-3428.2014.01.039
    Aiming at the premature convergence of the traditional genetic algorithm, an adaptive genetic algorithm with chaos and multiple populations based on cloud control is proposed. Keeping a balance between the global state and individual differences, the crossover rate and mutation rate are adaptively adjusted by cloud control. When the evolution proceeds normally, an individual-level measure of punishing the strong and helping the weak is taken, while when the algorithm falls into premature convergence, an operation on the inferior individuals is performed. Additionally, the proposed algorithm adopts a multi-population optimization mechanism to evolve each population simultaneously. Experimental results show that, compared with the Standard Genetic Algorithm(SGA) and the Adaptive Genetic Algorithm(AGA), the proposed algorithm can effectively avoid premature convergence and obtain higher convergence efficiency.
  • GUO Xiao-yan, WANG Lian-guo, DAI Yong-qiang
    Computer Engineering. 2014, 40(1): 191-194,198. https://doi.org/10.3969/j.issn.1000-3428.2014.01.040
    In the later period of the search for the Traveling Salesman Problem(TSP), the reduction of solution diversity reduces both the search speed and the accuracy, so a Subsection Shuffled Frog Leaping Algorithm(S-SFLA) is proposed. In the initial period of the search, an in-over operator is used to reduce crossing paths; in the later period, neighborhood search(individual neighborhood, local optimal area and global optimal neighborhood) is introduced to increase the diversity of the population. Throughout the search, the historical global optimal solution and the historical local optimal solutions are remembered to avoid circular search, and in the local update process every frog has the opportunity to be updated. Experimental results show that, compared with the genetic algorithm, the ant colony algorithm and the basic shuffled frog leaping algorithm, S-SFLA has higher precision and search speed in solving medium-scale TSP instances.
  • JIA He-ming, SONG Wen-long, MU Hong-wei, CHE Yan-ting
    Computer Engineering. 2014, 40(1): 195-198. https://doi.org/10.3969/j.issn.1000-3428.2014.01.0041
    In order to shorten the alignment time, initial alignment is carried out with a large azimuth misalignment angle, and nonlinear filtering methods are utilized. A Gaussian Process regression Square Root Central Difference Kalman Filter(GP-SRCDKF) is therefore proposed, which incorporates Gaussian process regression into the SRCDKF algorithm to obtain the system regression model and noise covariance. The regression model replaces the state equation and observation equation, and the corresponding noise covariance is adjusted adaptively in real time. This not only overcomes the deficiencies of the Extended Kalman Filter(EKF), namely low precision and the need to calculate the Jacobian matrix, but also solves the problem that traditional filters are limited by uncertain system dynamic models and inaccurate noise covariance. Simulation results verify the effectiveness and superiority of the proposed algorithm.
  • BAO Li-qun, HOU Zhi-wei, LI Xiang-lin
    Computer Engineering. 2014, 40(1): 199-202. https://doi.org/10.3969/j.issn.1000-3428.2014.01.042
    In view of the lack of Chinese SMS sample libraries, this paper proposes a client-side sample characteristics library generation method. It designs a sample characteristics database for client-side SMS spam filtering, and performs text preprocessing and Chinese word segmentation on messages received by the client. Considering that low-frequency words carry a high amount of information and that some terms have strong category characteristics, it improves the mutual information evaluation function for feature extraction, extracts the sample characteristics and forms the characteristic data. Experiments using the Bayesian algorithm test the impact of the number of features on filtering performance, and the results show that the accuracy rate reaches its maximum when the number of features is 10. The database file size is also tested: when the number of key words reaches 2 000, the database file is about 714.28 KB, so the method can run on an ordinary mobile phone platform, and the tests show its feasibility.
  • XIAO Jia-lin, ZHAO Yu-qing, WANG Ying
    Computer Engineering. 2014, 40(1): 203-208. https://doi.org/10.3969/j.issn.1000-3428.2014.01.043
    In the strong-noise environment of construction machinery there are many noise sources, speech is often covered by machine noise, and calls often fail and waste bandwidth. To solve this problem, a new Voice Activity Detection(VAD) algorithm based on the Hidden Markov Model(HMM) and Support Vector Machine(SVM), HMM/SVM-VAD, is proposed. The algorithm inputs the Mel Frequency Cepstrum Coefficients(MFCC) into the HMM, obtains the N-best recognition results with the Viterbi algorithm, transforms the N-best recognition results into SVM feature vectors, and uses the SVM to obtain the classification results. Experimental results show that HMM/SVM-VAD is better than the traditional statistical algorithm and the wavelet SVM algorithm. Under machine working noise, the new method performs on average 9% better than the static statistical algorithms and 11% better than the wavelet SVM algorithm; under cab noise the improvement is smaller but grows faster, reaching 9% over the traditional statistical algorithm.
  • GU Ping, LUO Zhi-heng, OUYANG Yuan-you
    Computer Engineering. 2014, 40(1): 209-212. https://doi.org/10.3969/j.issn.1000-3428.2014.01.044
    Blocking and evolvement of classifiers are two key issues which affect the performance of hierarchical classification. To solve these problems, this paper introduces a new algorithm that incrementally learns a hierarchical classification tree by extracting appropriate terms from documents for each node of the taxonomy, and classification is obtained by evaluating the confidence of document on each path from root to the leaf category. Considering the characteristic of incremental learning, it incorporates semantic similarity into the confidence estimation of classification path with aim to alleviate the problem of features incompleteness. Experimental results show that compared with hierarchical Bayes and SVM, the algorithm not only has the characteristics of adaptability, but also can improve the classification accuracy by about 8%.
  • GAO Min, GUO Ye-cai
    Computer Engineering. 2014, 40(1): 213-217. https://doi.org/10.3969/j.issn.1000-3428.2014.01.045
    The Multi-Modulus Algorithm(MMA) used to equalize high-order Quadrature Amplitude Modulation(QAM) signals has disadvantages such as a slow convergence rate, large mean square error and a tendency to fall into local minima. To overcome these problems, an orthogonal Wavelet Transform Multi-modulus blind Equalization Algorithm based on Chaos Glowworm Swarm Optimization(CGSO-WT-MMA) is proposed. In the proposed algorithm, MMA is integrated with CGSO and WT: the de-correlation ability of the WT is used to reduce the autocorrelation of the signal, and the global search ability of the GSO algorithm, combined with the local search ability of the chaos algorithm, is used to optimize the equalizer weight vector. Simulation results show that, compared with the MMA algorithm, the mean square error of the proposed algorithm decreases by 4 dB, its convergence is about 5 000 steps faster, and its steady-state performance is obviously improved.
  • ZHANG Lu-lu, CHEN Yao-wu, JIANG Rong-xin
    Computer Engineering. 2014, 40(1): 218-221,227. https://doi.org/10.3969/j.issn.1000-3428.2014.01.046
    Aiming at the requirements of high real-time performance and high accuracy for abnormal sound detection in an intelligent surveillance front-end system, this paper presents an abnormal sound detection scheme based on mixed characteristic parameters and an improved Dynamic Time Warping(DTW) algorithm. The system detects sound endpoints based on short-time magnitude and short-time threshold-crossing rate, extracts mixed characteristic parameters including short-time magnitude, Mel Frequency Cepstrum Coefficients(MFCC) and difference coefficients, and recognizes sounds with the improved DTW algorithm. Experimental results on the TI TMS320DM368 processor platform show that the recognition time of the intelligent surveillance front-end system based on the proposed scheme is less than 1 s and the average recognition rate is 89.3%.
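    The template matching at the core of such a scheme rests on the DTW distance; a textbook version is sketched below (a plain reference implementation, not the paper's improved DTW), with hypothetical sound templates.

```python
import numpy as np

def dtw_distance(a, b):
    """Textbook dynamic time warping between two feature sequences (frames x dims)."""
    a, b = np.atleast_2d(a), np.atleast_2d(b)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # local frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# an unknown sound is given the label of the closest reference template
templates = {"glass_break": np.sin(np.linspace(0, 3, 40))[:, None],
             "scream": np.linspace(0, 1, 55)[:, None]}
query = np.sin(np.linspace(0, 3, 52))[:, None]
print(min(templates, key=lambda k: dtw_distance(query, templates[k])))  # -> glass_break
```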
  • JIANG Hua, HAN An-qi, WANG Mei-jia, WANG Zheng, WU Yun-ling
    Computer Engineering. 2014, 40(1): 222-227. https://doi.org/10.3969/j.issn.1000-3428.2014.01.047
    When calculating the similarity of strings, the Levenshtein Distance(LD) algorithm only considers the number of editing operations and ignores the common substrings of the two strings. Aiming at this problem, an improved Levenshtein distance algorithm is proposed to calculate the similarity. The new algorithm improves the similarity formula and the Levenshtein matrix: when calculating the distance, it also computes the longest common substring and all the LD backtracking paths in the original matrix. In the experiment, a word is selected as the source string and a set of words similar to it in different degrees as target strings, and the new similarity formula is compared with the existing string similarity calculation methods; the new formula reduces the number of target strings entering the winner table, with a similarity sample range of 0.331 and a standard deviation of 0.150. Experimental results show that the new algorithm has higher accuracy and a more flexible searching way at the same space complexity.
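    One plausible way to blend the two quantities named in the abstract, edit distance and the longest common substring, into a single similarity score is sketched below; the 50/50 weighting is an assumption, since the paper's exact formula is not given here.

```python
def levenshtein(s, t):
    """Standard edit distance (insert / delete / substitute), rolling array."""
    m, n = len(s), len(t)
    d = list(range(n + 1))
    for i in range(1, m + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            cur = min(d[j] + 1, d[j - 1] + 1, prev + (s[i - 1] != t[j - 1]))
            prev, d[j] = d[j], cur
    return d[n]

def longest_common_substring(s, t):
    """Length of the longest contiguous common substring."""
    best, prev = 0, [0] * (len(t) + 1)
    for i in range(1, len(s) + 1):
        cur = [0] * (len(t) + 1)
        for j in range(1, len(t) + 1):
            if s[i - 1] == t[j - 1]:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

def similarity(s, t):
    """Blend normalized edit distance and common-substring length (assumed weights)."""
    if not s and not t:
        return 1.0
    n = max(len(s), len(t))
    return 0.5 * (1 - levenshtein(s, t) / n) + 0.5 * (longest_common_substring(s, t) / n)

print(round(similarity("levenshtein", "levenstein"), 3))   # -> 0.727
```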
  • YU Xu-yong, WANG Zhi-jie
    Computer Engineering. 2014, 40(1): 228-231,235. https://doi.org/10.3969/j.issn.1000-3428.2014.01.048
    In order to achieve automatic tracking of vehicles in road traffic, this paper proposes a mean shift tracking algorithm based on gray-level triggering. It detects targets using a modified Gaussian mixture model to reduce the influence of sudden illumination changes on target detection, thereby guaranteeing minimal gray-level interference in the triggered areas. The algorithm designs a trigger method based on changes of the gray level in a virtual region. By capturing the local peak gray value in the triggered area, the method expands the target search area to lock onto the moving vehicle, and thereby achieves mean shift tracking in which the kernel function adjusts its width automatically. Experimental results show that the method achieves automatic trigger tracking efficiently and accurately, with high trigger accuracy and good practical value.
  • LIU Jun-qing, MA Lei, XIANG Yan, YI San-li, CHEN Hong-lei, ZHANG Qian, HE Jian-feng
    Computer Engineering. 2014, 40(1): 232-235. https://doi.org/10.3969/j.issn.1000-3428.2014.01.049
    Some current image quality assessment methods do not consider the impact of the Region of Interest(ROI) on image quality, so their results deviate from human subjective visual quality. In this paper, an assessment strategy based on the ROI and dual-scale edge structure similarity is proposed. The quality assessment of the image is a weighted combination of the ROI and non-ROI: dual-scale edge structure similarity is used in the ROI, and classical structure similarity is applied in the non-ROI. Experimental results show that, by considering the influence of the ROI on the image and strengthening the proportion of the ROI evaluation value in the overall rating, the model is more consistent with human subjective visual quality.
  • TONG Wei, ZHAO Xu-dong, WANG Shi-lin, LI Sheng-hong
    Computer Engineering. 2014, 40(1): 236-238,245. https://doi.org/10.3969/j.issn.1000-3428.2014.01.050
    With the popularity of graphics editing software, tampering with a digital image becomes easier and easier, so the digital image forensics problem urgently needs to be solved. Aiming at this problem, this paper proposes an image splicing detection method based on image information entropy features and multi-step Markov features, treating splicing detection as a two-class pattern recognition problem. The method extracts entropy features from the original image, its three-level Haar Discrete Wavelet Transform(DWT) and multiple-size block Discrete Cosine Transform(DCT), and extracts multi-step Markov transition probability matrix features from the block DCT. The statistical characteristics consist of the information entropy and the multi-step Markov features, and a Support Vector Machine(SVM) is used to judge the image category. Experimental results show that the proposed method, applied to the Columbia image dataset, possesses promising splicing detection capability and achieves a detection accuracy of 89.91%.
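    The Markov feature mentioned above is a common construction in splicing and steganalysis work: a transition-probability matrix of thresholded differences of block-DCT coefficients. A generic single-step sketch is given below (not the paper's exact multi-step, multi-block variant).

```python
import numpy as np

def markov_transition_features(coeffs, T=3):
    """Transition-probability matrix of thresholded horizontal differences."""
    diff = np.clip(np.round(coeffs[:, :-1] - coeffs[:, 1:]), -T, T).astype(int)
    src, dst = diff[:, :-1].ravel(), diff[:, 1:].ravel()
    size = 2 * T + 1
    M = np.zeros((size, size))
    for a, b in zip(src + T, dst + T):               # shift values into [0, 2T]
        M[a, b] += 1
    row_sums = M.sum(axis=1, keepdims=True)
    return M / np.where(row_sums == 0, 1, row_sums)  # rows are P(next | current)

rng = np.random.default_rng(0)
fake_dct = rng.normal(0, 2, size=(64, 64))           # stand-in for block-DCT coefficients
print(markov_transition_features(fake_dct).shape)    # -> (7, 7)
```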
  • LIU Zhi-yuan, CHEN Yao-wu
    Computer Engineering. 2014, 40(1): 239-245. https://doi.org/10.3969/j.issn.1000-3428.2014.01.051
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    This paper proposes an encoding method that improves compression efficiency within the JPEG XR image standard. The method designs an adaptive quantization parameter selection algorithm based on image content, using the perceptual features of the Human Visual System(HVS). According to the Just Noticeable Difference(JND) model, the macroblocks in the image compression process are divided into 6 types by local texture and local brightness. Each type is adaptively assigned different quantization parameters for the Direct Current(DC), Low Pass(LP) and High Pass(HP) coefficients, which distributes the bit rate of the entire image reasonably according to texture complexity and brightness. Higher compression efficiency and a lower rate are therefore achieved at the same subjective quality. Experimental results show that the proposed algorithm obtains about 10% higher compression efficiency compared with the fixed quantization parameter algorithm.
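    A hedged sketch of content-adaptive quantization parameter selection: macroblocks are sorted into 6 classes by local brightness and texture and mapped to (DC, LP, HP) quantization parameters; the class thresholds and QP values below are invented placeholders, not the paper's JND-derived settings.

import numpy as np

def classify_macroblock(block):
    # Hypothetical 6-way classification: 3 brightness levels x 2 texture levels.
    brightness = block.mean()
    texture = block.var()
    b_idx = 0 if brightness < 60 else (1 if brightness < 170 else 2)
    t_idx = 0 if texture < 100 else 1
    return b_idx * 2 + t_idx          # classes 0..5

# Placeholder (DC, LP, HP) quantization parameters per class.
QP_TABLE = {
    0: (8, 10, 12), 1: (10, 14, 18), 2: (6, 8, 10),
    3: (8, 12, 16), 4: (10, 12, 16), 5: (12, 16, 20),
}

def qp_for_block(block):
    return QP_TABLE[classify_macroblock(block)]

print(qp_for_block(np.random.default_rng(0).integers(0, 256, (16, 16))))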
  • DU Yan-zhen, SUN Feng-rong, LI Kai-yi, SONG Shang-ling, JIN Xin
    Computer Engineering. 2014, 40(1): 246-249,262. https://doi.org/10.3969/j.issn.1000-3428.2014.01.052
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    To solve the problem that the Synthetic Focusing(SF) ultrasound imaging method, which must handle a large amount of data during image reconstruction, is restricted when applied to portable B-mode ultrasound equipment, this paper proposes a portable B-mode ultrasound imaging method named SFCS based on compressive sensing theory and the SF beamforming algorithm. Using the traditional SF beamforming method, subsampled RF lines are acquired from the subsampled measurement signal that is randomly sampled from the raw RF echo signal. Based on compressive sensing theory, the full RF lines can then be reconstructed from the subsampled ones for subsequent imaging processing. Simulation results show that SFCS effectively addresses the data-size problem that SF ultrasound imaging always confronts. Since SFCS meets the requirement that portable B-mode ultrasound equipment be miniaturized and low cost, it has practical engineering application value.
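    As one possible stand-in for the reconstruction step, the sketch below recovers a sparse signal from random measurements with iterative soft thresholding (ISTA); the measurement matrix, sparsity level and regularization weight are illustrative assumptions, not the paper's setup.

import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(y, A, lam=0.1, iters=200):
    # Iterative soft thresholding for min 0.5*||y - A x||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 96, 8                       # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
x_hat = ista(A @ x_true, A)
print(np.linalg.norm(x_hat - x_true))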
  • LIU Zhong-min, HU Wen-jin, LI Zhan-ming
    Computer Engineering. 2014, 40(1): 250-253,267. https://doi.org/10.3969/j.issn.1000-3428.2014.01.053
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    In inpainting based on Morphological Component Analysis(MCA), imposing a total variation penalty is useful, particularly for recovering piecewise smooth objects, but it easily produces staircase artifacts. A new image inpainting method based on MCA is therefore proposed. The proposed algorithm uses the p-Laplace operator so that information spreads not only along the edge direction but also in the gradient direction, which preserves edges while avoiding staircase artifacts in smooth areas, and the result is also less sensitive to noise. Experimental results on Tangka images containing scratches and block loss show that the proposed method achieves a better inpainting effect.
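    A minimal sketch of p-Laplace diffusion restricted to the missing region, assuming a simple explicit iteration; the parameters p, dt and the iteration count are placeholders, and the MCA decomposition itself is not shown.

import numpy as np

def p_laplace_inpaint(img, mask, p=1.2, iters=500, dt=0.1, eps=1e-6):
    # Diffuse div(|grad u|^(p-2) grad u) only inside the masked (missing)
    # region; boundary values come from the intact pixels.
    u = img.astype(float).copy()
    for _ in range(iters):
        gx = np.gradient(u, axis=1)
        gy = np.gradient(u, axis=0)
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        w = mag ** (p - 2)                         # p-Laplace diffusivity
        div = np.gradient(w * gx, axis=1) + np.gradient(w * gy, axis=0)
        u[mask] += dt * div[mask]
    return u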
  • SUN Li-hui, LI Jun-shan, LU Mei-ling
    Computer Engineering. 2014, 40(1): 254-257,271. https://doi.org/10.3969/j.issn.1000-3428.2014.01.054
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    To solve the difficulty of denoising turbulence-degraded images, whose prior information is unknown and whose noise components are complex, a mixed-domain denoising algorithm is put forward. A mixed-noise detection algorithm aimed at each noise component is designed. Improved denoising methods are used to remove fixed-valued and random-valued impulse noise in the spatial domain, while threshold processing based on Nonsubsampled Contourlet Transform(NSCT) decomposition removes Gaussian noise in the transform domain, with a cost function designed to select the number of decomposition levels adaptively. Noise detection is performed iteratively, and a stopping condition makes the algorithm adaptive. Simulation experiments show that the noise detection and denoising performance is good and the detail information of the degraded image is restored. Meanwhile, the complexity of the algorithm is low and its real-time performance is good, so the algorithm satisfies the denoising needs of aero-optics degraded images.
  • MA Teng, CHEN Shu-qiao, ZHANG Xiao-hui
    Computer Engineering. 2014, 40(1): 258-262. https://doi.org/10.3969/j.issn.1000-3428.2014.01.055
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    In order to reduce the excessive memory usage of existing approaches to high-speed, large-volume, multi-field packet classification, an improved HyperSplit algorithm is proposed. After analyzing the causes of the excessive memory usage, the heuristics for choosing cutting points and dimensions are redesigned and redundancy is eliminated: rule replication is greatly reduced, redundant rules and nodes are removed, and the decision tree structure is optimized. Simulation results demonstrate that, independent of the rule base's type and characteristics, the algorithm greatly reduces memory usage without increasing the number of memory accesses, ensuring packets can still be processed at wire speed; when the classifier contains 10^5 rules, the algorithm consumes only about 80% of the memory of HyperSplit.
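    The toy sketch below shows the general shape of a HyperSplit-style decision tree over range rules, with rule replication occurring when a rule straddles a cut; the split heuristic used here is illustrative and is not the paper's memory-reduction heuristic.

# Each rule is a tuple of (lo, hi) ranges, one per field.
def build_tree(rules, depth=0, leaf_size=2, max_depth=16):
    if len(rules) <= leaf_size or depth >= max_depth:
        return ("leaf", rules)
    best = None
    for dim in range(len(rules[0])):
        for _, hi in (r[dim] for r in rules):
            cut = hi
            left = [r for r in rules if r[dim][0] <= cut]
            right = [r for r in rules if r[dim][1] > cut]
            # Prefer balanced children with little replication (toy heuristic).
            score = abs(len(left) - len(right)) + (len(left) + len(right) - len(rules))
            if best is None or score < best[0]:
                best = (score, dim, cut, left, right)
    _, dim, cut, left, right = best
    return ("node", dim, cut,
            build_tree(left, depth + 1, leaf_size, max_depth),
            build_tree(right, depth + 1, leaf_size, max_depth))

def classify(tree, pkt):
    while tree[0] == "node":
        _, dim, cut, left, right = tree
        tree = left if pkt[dim] <= cut else right
    return [r for r in tree[1] if all(lo <= pkt[d] <= hi for d, (lo, hi) in enumerate(r))]

rules = [((0, 63), (0, 255)), ((64, 127), (0, 127)), ((0, 255), (128, 255))]
print(classify(build_tree(rules), (70, 200)))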
  • ZHA Xiu-qi, WU Rong-quan, GAO Yuan-jun
    Computer Engineering. 2014, 40(1): 263-267. https://doi.org/10.3969/j.issn.1000-3428.2014.01.056
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    With the rise of Internet technology, more and more enterprises are beginning to migrate from C/S mode to B/S mode. Drawing on traditional software reuse technology, this paper makes a deep analysis of existing C/S mode systems and presents an implementation for converting application software from C/S mode to B/S mode. It uses virtual application technology and the UI Automation(UIA) technology of the .NET framework, with XML files as the information exchange carrier, to let a browser operate C/S software remotely. The approach needs no understanding of the C/S software's internals, makes full use of existing resources to complete the development of the B/S structure, and requires zero modification of the C/S software's code. It is platform independent and possesses the features of a uniform interface, high scalability and easy maintenance. Experimental results show that the method can be successfully applied to the calculator program under Windows XP.
  • WANG Jing-xiong, TIAN Xiang, HU Yin-feng
    Computer Engineering. 2014, 40(1): 268-271. https://doi.org/10.3969/j.issn.1000-3428.2014.01.057
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Because motion detection algorithms in video surveillance systems are easily affected by background movements and noise, so that the detected salient motion is not accurate enough, an improved algorithm based on H.264 encoding is proposed. The algorithm uses the Sum of Absolute Differences(SAD) of encoded macroblocks, produced in the motion estimation process of H.264 encoding, and calculates its variance as the motion indicator; an auto-adjusting threshold is used at the same time to decrease the effect of background movements and noise. Experimental results show that the detection decreases the effect of background movements and noise, makes salient motion detection more accurate, and keeps the computational cost low, at 18.3% of that of the frame-difference algorithm, which makes it more suitable for real-time video surveillance systems.
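    A hedged sketch of the indicator-and-threshold idea: the variance of per-macroblock SAD values is compared against an adaptive threshold built from recent frames; the warm-up length, window size and factor k are assumptions.

import numpy as np

def motion_triggered(sad_per_mb, history, k=3.0):
    # sad_per_mb: SAD values of the current frame's macroblocks (from the
    # encoder's motion estimation); history: list of past indicator values.
    indicator = np.var(sad_per_mb)
    if len(history) < 10:                   # warm-up period (assumption)
        history.append(indicator)
        return False
    mean, std = np.mean(history), np.std(history)
    history.append(indicator)
    del history[:-100]                      # keep a sliding window of 100 frames
    return indicator > mean + k * std       # adaptive threshold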
  • YANG Ze-xue, HAO Zhong-xiao
    Computer Engineering. 2014, 40(1): 272-274,279. https://doi.org/10.3969/j.issn.1000-3428.2014.01.058
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    In order to solve continuous reverse nearest neighbor queries over moving points in a dynamic environment, the problem is divided into the monochromatic and bichromatic cases. Using the Voronoi diagram of the moving points, monochromatic and bichromatic continuous reverse nearest neighbor query algorithms are given, together with the relevant theorems, proofs of correctness and termination, and time complexity analysis. According to whether the topology of the Voronoi diagram of the moving points changes, two cases are distinguished: change and no change. By analyzing how the candidate region changes in each case, the Voronoi diagram is reconstructed only in the changed region and corresponding solutions are given. In most cases the query results can be found by generating the Voronoi diagram of only the local moving points, which reduces the cost of continuous reverse nearest neighbor queries.
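    For reference, the brute-force sketch below states the (static) reverse nearest neighbor semantics that the paper answers continuously via Voronoi diagrams of moving points; it is a definition-level check, not the paper's algorithm.

import numpy as np

def reverse_nearest_neighbors(points, q_index):
    # Return the indices of points whose nearest neighbour (among all
    # other points, including q) is the query point q.
    pts = np.asarray(points, dtype=float)
    q = pts[q_index]
    result = []
    for i, p in enumerate(pts):
        if i == q_index:
            continue
        others = np.delete(pts, i, axis=0)
        d_nn = np.min(np.linalg.norm(others - p, axis=1))
        if np.linalg.norm(q - p) <= d_nn + 1e-12:
            result.append(i)
    return result

print(reverse_nearest_neighbors([(0, 0), (1, 0), (5, 0), (6, 0)], 0))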
  • LUO Ming-wei, YAO Hong-liang, LI Jun-zhao, WANG Hao
    Computer Engineering. 2014, 40(1): 275-279. https://doi.org/10.3969/j.issn.1000-3428.2014.01.059
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Current hierarchical community classification algorithms find it difficult to select appropriate initial nodes, which leads to poor community structure division. This paper therefore proposes a layer partition algorithm that selects core nodes hierarchically. The algorithm screens cores using degree and closeness as evaluation criteria. To overcome the shortcoming of using a dissimilarity index to measure the similarity of nodes within a community, node dissimilarity is introduced to evaluate the similarity between the initial cores and the initial node set with higher node similarity. Community division in complex networks is then realized through a global modularity optimization strategy. Experimental results on standard datasets show that, compared with the GN and FN algorithms, the proposed algorithm has a better classification effect and lower time complexity.
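    A minimal sketch of the core-selection step, assuming a simple weighted mix of degree and closeness centrality computed with networkx; the weight alpha, the number of cores k and the example graph are placeholders, and the dissimilarity-based expansion and modularity optimization are not shown.

import networkx as nx

def select_cores(G, k=3, alpha=0.5):
    # Score nodes by a weighted mix of degree and closeness centrality and
    # take the top-k as candidate community cores (illustrative criterion).
    deg = nx.degree_centrality(G)
    clo = nx.closeness_centrality(G)
    score = {v: alpha * deg[v] + (1 - alpha) * clo[v] for v in G}
    return sorted(score, key=score.get, reverse=True)[:k]

G = nx.karate_club_graph()
print(select_cores(G))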
  • ZOU Zhi-bin, LI Yun, ZHANG Xiao-xian
    Computer Engineering. 2014, 40(1): 280-282,286. https://doi.org/10.3969/j.issn.1000-3428.2014.01.060
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    For an implementation of the TTCN-3 data system, supporting features such as type compatibility is mandatory for conformance with the TTCN-3 standards. To solve this problem, this paper gives a translation scheme from TTCN-3 into Java that utilizes Java's object-oriented features such as inheritance and polymorphism and applies the abstract factory pattern. Inspection and analysis of the generated code show that this scheme cleanly separates data types from values. Besides, the scheme supports compatibility between different data types and comparison between different data values, and is easy to extend.
  • CHEN Peng, CAO Jian-wei, CHEN Qing-kui
    Computer Engineering. 2014, 40(1): 283-286. https://doi.org/10.3969/j.issn.1000-3428.2014.01.061
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    To address the problem of parallel decoding of H.264 video streams, this paper builds a CPU/GPU cooperative computing model to accelerate video encoding and decoding. The model uses the Compute Unified Device Architecture(CUDA) as the GPU programming model and proposes and implements GPU-accelerated inverse DCT and intra-frame prediction. While maintaining high calculation accuracy, the combined CUDA mixed programming greatly improves the computational performance of the system. The experiments compare the parallel algorithm with CPU-only decoding and verify the acceleration of the parallel decoding algorithm using different numbers of video streams. Experimental results show that the system improves video stream codec efficiency, running on average about 10 times faster than CPU-only computation.
  • HU Lian-da, ZHANG Ji
    Computer Engineering. 2014, 40(1): 287-290,294. https://doi.org/10.3969/j.issn.1000-3428.2014.01.062
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    The original GTK+ on DirectFB graphics system is insufficiently optimized for hardware acceleration, so the performance of graphical user interfaces developed on domestic embedded systems is low. This paper therefore proposes a graphics system performance optimization method. Optimizing the storage allocation strategy of graphics components addresses the different speeds at which the CPU accesses local memory and video memory, and underlying extension methods improve the execution efficiency of extended drawing instructions such as filled ellipses and filled polygons. Test data show that, with hardware acceleration enabled, the storage allocation optimization increases the execution speed of CPU instructions several times, and the underlying drawing instruction extension provides a hardware acceleration rate five times higher than traditional application layer extension methods.
  • ZHOU Yi, ZHANG Xiao-xian, CHEN Li-rong
    Computer Engineering. 2014, 40(1): 291-294. https://doi.org/10.3969/j.issn.1000-3428.2014.01.063
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    The system adapter is a key component of the AUTOSAR conformance test framework. To solve the problem that the tester and the system under test run in different environments, a new method of designing the system adapter is proposed in which an invoking action is transformed into a message sending action. With this method, the system adapter is divided into a SUT adapter and a target adapter that run on different platforms and interact with each other through a module independent message format. Selecting LinSM as the test target and using standard test scripts provided by AUTOSAR, the system adapter is implemented. The tests indicate that all 30 test cases reach the expected verdict with this method, which proves that it is feasible to execute conformance tests in a heterogeneous environment in this way.
  • XIE Zhi-qiang, ZHENG Fu-ping, ZHU Tian-hao, ZHOU Han-xiao
    Computer Engineering. 2014, 40(1): 295-300,304. https://doi.org/10.3969/j.issn.1000-3428.2014.01.064
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    For the production situation in which two workshops have the same equipment resources, an integrated scheduling algorithm with equalized processing of schedulable processes in the two workshops is put forward, aiming at a short product completion time and as little movement of processes between the two workshops as possible. To reduce the completion time of a single complex product, the algorithm considers the flexibility and parallelism of the schedulable processes and the fact that the two workshops have the same equipment, using workshop equilibrium strategies to group the schedulable processes. To reduce the number of process moves, processes are assigned to workshops and dispatched according to the proposed workshop determination rule. Example results show that the algorithm achieves integrated scheduling of the two workshops, with product completion time as short as possible and the number of process moves as small as possible, in quadratic time complexity.
  • WANG Guan-jun, TONG Min-ming, ZHOU Yong, ZHAO Ying
    Computer Engineering. 2014, 40(1): 301-304. https://doi.org/10.3969/j.issn.1000-3428.2014.01.065
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    This paper studies the sequential circuit equivalence verification problem and puts forward a bounded equivalence verification method based on mining-assisted Sequential Equivalence Checking(SEC). The sequential circuit to be verified is expanded, time frame by time frame, into a set of Polynomial Symbolic Algebra(PSA) representations. Invariants and global constraints are mined from the expression database according to the time series, and the invariants can be arbitrary polynomials. Moreover, the approach can also mine illegal constraints and complex polynomial relationships, with which the solution space is pruned dramatically. An equivalence verification approach based on a Satisfiability Modulo Theories(SMT) engine is proposed. Experimental results show that the approach converges rapidly, achieving verification speedups of 1~2 orders of magnitude while effectively eliminating false verification results.
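    As a toy illustration of the SMT-based equivalence check (after time-frame expansion and invariant mining, which are not shown), the sketch below asks Z3 whether two small combinational functions can ever differ.

from z3 import Bools, Solver, And, Or, Xor, sat

a, b, c = Bools("a b c")
f1 = Or(And(a, b), And(a, c))       # a & b | a & c
f2 = And(a, Or(b, c))               # a & (b | c)

s = Solver()
s.add(Xor(f1, f2))                  # satisfiable iff the two functions differ
if s.check() == sat:
    print("not equivalent, counterexample:", s.model())
else:
    print("equivalent")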
  • WANG Guo-hui, ZHANG Xiao-yu, GUAN Yong, LIU Yong-mei
    Computer Engineering. 2014, 40(1): 305-308,314. https://doi.org/10.3969/j.issn.1000-3428.2014.01.066
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    In the case of limited Field Programmable Gate Array(FPGA) resources, this paper proposes an optimization method for SpaceWire redundancy. By adjusting the reliability of the functional sub-modules, it optimizes the backup scheme of the whole system and improves its reliability. The reliability of each sub-module is calculated from its failure rate λ, and, according to a score distribution method, a reliability index is calculated for each module so that the overall system reliability requirement is met. At the same time, based on nonlinear programming theory, the optimal solution is obtained mathematically to arrive at the best backup allocation scheme for the redundancy design of the various functional sub-modules. Simulation results prove that the method can meet the reliability requirement while saving on-chip FPGA resources.
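    A hedged sketch of the allocation problem: choose the number of parallel copies of each sub-module to maximize the reliability of a series system under a resource budget. Here it is solved by enumeration rather than by the paper's nonlinear programming formulation, and the reliabilities, costs and budget are invented examples.

from itertools import product

def best_allocation(module_reliab, module_cost, budget, max_copies=3):
    # Series system of parallel groups: system reliability is the product of
    # 1 - (1 - r_i)^n_i over modules; pick copy counts within the budget.
    best_rel, best_copies = -1.0, None
    for copies in product(range(1, max_copies + 1), repeat=len(module_reliab)):
        cost = sum(c * n for c, n in zip(module_cost, copies))
        if cost > budget:
            continue
        rel = 1.0
        for r, n in zip(module_reliab, copies):
            rel *= 1.0 - (1.0 - r) ** n
        if rel > best_rel:
            best_rel, best_copies = rel, copies
    return best_rel, best_copies

print(best_allocation([0.95, 0.90, 0.85], [120, 200, 150], budget=800))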
  • PENG Cong, CHAI Xiao-li, YU Xin-sheng, LI Hong-hai
    Computer Engineering. 2014, 40(1): 309-314. https://doi.org/10.3969/j.issn.1000-3428.2014.01.067
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    In order to increase the communication rate of in-vehicle electronic buses in an integrated architecture, and to meet the requirements of the processor project on the integration platform, a Field Programmable Gate Array(FPGA)-based bus interface unit is designed that combines the complementary functions and configurations of the dual CAN and FlexRay buses. The internal CAN bus controller, FlexRay bus controller, RapidIO bus interface and Ethernet modules are implemented in the FPGA, the control and extension of the high-speed interfaces is realized, and the module interfaces are made configurable. Tests prove that the CAN interface and FlexRay interface work properly at the specified baud rates and that every performance index meets the project requirements.
  • JI Yu-chen, FU Xiao, SHI Jin, LUO Bin, ZHAO Zhi-hong
    Computer Engineering. 2014, 40(1): 315-320. https://doi.org/10.3969/j.issn.1000-3428.2014.01.068
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Computer intrusion forensic evidence is easy to modify, easy to lose, comes from numerous sources and has multifarious content. In view of these characteristics, this paper discusses the current state of development of intrusion event reconstruction, analyzes the sources of intrusion event reconstruction in terms of system layer objects/events and operating system layer objects/events, and introduces the main intrusion event reconstruction tools. It reviews existing methods for intrusion event reconstruction, including timestamp-based log analysis, semantic integrity checking, tracking technologies based on operating system layer objects, and event reconstruction models based on finite state machines, evaluates their performance in terms of several metrics such as reconstruction efficiency, false positive rate, credibility of evidence, authenticity of evidence and reconstruction environment, and summarizes the pros and cons of each method. Some important future research directions in intrusion event reconstruction for computer intrusion forensics are discussed.
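    As a small illustration of timestamp-based log analysis, one of the reviewed techniques, the sketch below merges per-source event logs into a single ordered timeline; the log entries are invented examples, and real reconstruction must also cope with clock skew and tampered records.

import heapq
from datetime import datetime

def merge_timelines(*logs):
    # Merge per-source event logs (already parsed into (timestamp, source,
    # message) tuples) into one chronologically ordered timeline.
    return list(heapq.merge(*[sorted(l) for l in logs]))

syslog = [(datetime(2014, 1, 15, 9, 0, 12), "syslog", "sshd: accepted password for root")]
weblog = [(datetime(2014, 1, 15, 9, 0, 5), "httpd", "GET /admin.php 200")]
for ts, src, msg in merge_timelines(syslog, weblog):
    print(ts.isoformat(), src, msg)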