
15 December 2013, Volume 39 Issue 12
    

  • QIN Na, JIN Wei-dong, HUANG Jin, LI Zhi-min, LIU Jing-bo
    Computer Engineering. 2013, 39(12): 1-4,10. https://doi.org/10.3969/j.issn.1000-3428.2013.12.001
    Mechanical faults of the bogie seriously affect the safety and comfort of high-speed trains. Because the vibration signals of the bogie and car body change when a fault occurs, this paper proposes a fault diagnosis method for high-speed train bogies based on Ensemble Empirical Mode Decomposition(EEMD). Four typical working conditions are simulated: air spring fault, yaw damper fault, lateral damper fault and normal condition. EEMD decomposes the vibration signal into several intrinsic mode functions, and an energy moment feature is extracted to reflect the time distribution of energy. The 2nd to 6th energy moments are chosen to form a 5-dimensional eigenvector, which is classified by a Support Vector Machine(SVM) at a speed of 200 km/h. Simulation results show that the correct recognition rate of this method exceeds 95%.
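    A minimal sketch of the feature pipeline described above, assuming the third-party PyEMD package for the EEMD step and a conventional time-weighted definition of the energy moment (neither is specified in the abstract); the 2nd to 6th moments form the 5-dimensional eigenvector fed to the SVM.

```python
# Hedged sketch: EEMD -> energy-moment features -> SVM. PyEMD (the EMD-signal
# package) and the exact energy-moment formula are assumptions, not the paper's code.
import numpy as np
from PyEMD import EEMD                 # third-party EEMD implementation (assumed)
from sklearn.svm import SVC

def energy_moments(signal, dt=1.0):
    """Return the 2nd..6th normalized energy moments of the EEMD IMFs."""
    imfs = EEMD().eemd(signal)                              # rows are intrinsic mode functions
    t = np.arange(signal.size) * dt
    m = np.array([np.sum(t * imf ** 2) for imf in imfs])    # time-weighted energy per IMF
    m = m / m.sum()                                         # normalize to a distribution
    return m[1:6]                                           # 5-dim eigenvector (assumes >= 6 IMFs)

def train_classifier(signals, labels, dt):
    """signals: list of 1-D vibration records; labels: condition codes 0..3."""
    feats = np.vstack([energy_moments(s, dt) for s in signals])
    return SVC(kernel="rbf").fit(feats, labels)
```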
  • CHEN Qi-xiang, LI Mao-qing, LIN Jun-ting
    Computer Engineering. 2013, 39(12): 5-10. https://doi.org/10.3969/j.issn.1000-3428.2013.12.002
    Trains in a railway section communicate with each other only indirectly, through the track circuit or base stations, which involves many nodes, offers low reliability and introduces safety risks. To realize direct train-to-train communication and meet the needs of a railway collision avoidance system, a direct train-to-train communication scheme is proposed. The ultrashort wave band, the maximum communication distance between trains and the transceiver structure are selected according to railway application requirements, and the feasibility of the technology is discussed. The direct train-to-train communication link is analyzed, path loss budget models for station and section scenarios are given, and the calculated receiver power under different conditions is illustrated together with simulations. Analysis results show that the proposed technology is feasible. The fading of the communication link is discussed with respect to the characteristics of the train running environment; the results show that multipath propagation and the Doppler effect lead to serious signal fading.
  • GUO Zi-gang, ZHAO Jian-bo, NI Ming
    Computer Engineering. 2013, 39(12): 11-16,21. https://doi.org/10.3969/j.issn.1000-3428.2013.12.003
    Train speed detection and positioning are key technologies for improving the safety and efficiency of train operation. Based on domestic and foreign research in this field, a train speed detection and positioning system is designed around an embedded processor and multi-sensor information fusion. An axle speed sensor, a Doppler radar speed sensor, an acceleration sensor and query balises are employed to collect train status information. Federated Kalman filtering and multi-sensor information fusion are used to process this information in an embedded system, which solves the errors caused by wheel diameter wear, idling, sliding and other factors in traditional systems. Simulation results in Matlab show that the system can effectively improve the precision of train speed detection and positioning.
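    A minimal sketch of the sensor-fusion idea, using a single one-dimensional Kalman filter that fuses wheel-odometer and Doppler-radar speed readings; the paper uses a federated filter with more sensors, and the noise values here are illustrative assumptions.

```python
# Hedged sketch: sequential Kalman updates fusing two speed sensors. A centralized
# filter stands in for the paper's federated structure; q, r_wheel, r_radar are
# illustrative noise parameters, not values from the paper.
import numpy as np

def fuse_speed(wheel_v, radar_v, q=0.05, r_wheel=0.5, r_radar=0.2):
    """wheel_v, radar_v: aligned measurement sequences; returns filtered speeds."""
    x, p = wheel_v[0], 1.0                            # state estimate and its variance
    out = []
    for zw, zr in zip(wheel_v, radar_v):
        p += q                                        # predict: constant-speed model
        for z, r in ((zw, r_wheel), (zr, r_radar)):   # sequential measurement updates
            k = p / (p + r)                           # Kalman gain
            x += k * (z - x)
            p *= 1.0 - k
        out.append(x)
    return np.array(out)
```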

  • ZHU Qin-yue, BAO Shi-jiong, TAN Xi-tang, WANG Dong-xiang
    Computer Engineering. 2013, 39(12): 17-21. https://doi.org/10.3969/j.issn.1000-3428.2013.12.004
    By analyzing the braking force distribution strategy of the existing Electric Multiple Unit(EMU) electro-pneumatic braking control system, an optimized electro-pneumatic cooperative braking control strategy is proposed to solve the imbalance between the electro-pneumatic braking of motor cars and trailer cars. It applies the electric-braking-priority control principle of EMUs and distributes the air braking force between the motor car and the trailer car in inverse proportion to load. Taking one motor car and one trailer car of a CRH2 EMU as the basic unit, the EMU electro-pneumatic cooperative braking control and the optimized braking force distribution algorithm are modeled, and different braking conditions are simulated in Matlab/Simulink. The results indicate that the optimized strategy, which distributes braking force in inverse proportion to load, significantly improves braking efficiency and reduces wheel tread wear of both the motor car and the trailer car.
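    A minimal sketch of the distribution idea only: electric braking is used first, and the remaining demand is met with air braking split between the motor car and the trailer car in inverse proportion to their loads; the formula details are illustrative assumptions, not the paper's model.

```python
# Hedged sketch: electro-braking-priority plus load-inverse air-brake split.
def distribute_braking(demand, electric_capacity, load_motor, load_trailer):
    electric = min(demand, electric_capacity)          # electric braking has priority
    air_total = demand - electric                      # remainder handled pneumatically
    w_motor = (1.0 / load_motor) / (1.0 / load_motor + 1.0 / load_trailer)
    return {"electric": electric,
            "air_motor": air_total * w_motor,
            "air_trailer": air_total * (1.0 - w_motor)}
```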
  • WANG You-zhao, HUANG Dong
    Computer Engineering. 2013, 39(12): 22-26. https://doi.org/10.3969/j.issn.1000-3428.2013.12.005
    The poor portability of online microcomputer anti-misoperation systems makes them difficult to popularize in the domestic electric power industry. To address this problem, this paper proposes a method, based on frame theory, for improving the portability of microcomputer anti-misoperation systems. It describes the parameters and information of the devices and logic used in the system, and builds and optimizes a revisable and portable model with a frame-network data structure. By combining frame theory with the object-oriented language Visual C++, software based on this structure is implemented. Test results show that the method shortens the search time to 3.9 s and improves search accuracy to 99.5%. It increases the efficiency of storing, retrieving, applying and modifying information in the computer, and saves the time needed for porting the system.
  • YAO Nian-min, DIAO Ying, HAN Yong
    Computer Engineering. 2013, 39(12): 27-30. https://doi.org/10.3969/j.issn.1000-3428.2013.12.006
    The existing Unified Level-aware Caching(ULC) protocol can effectively eliminate redundantly cached blocks in a multilevel hierarchy and the weakened locality at the storage server cache. However, when multiple application servers share one storage server, ULC cannot obtain the maximal marginal profit of the storage server cache when allocating cache capacity. A second-level cache dynamic allocation strategy called MG-ULC is therefore proposed for storage servers whose cache resources are shared by multiple applications. Based on the ULC protocol, the theoretical foundation of cache allocation is given in terms of a marginal gain factor, and MG-ULC dynamically allocates cache capacity according to the second-level cache marginal gain of each application's access pattern. Experimental results show that, as each application's access pattern changes, MG-ULC allocates the second-level cache more rationally than ULC, thereby achieving a higher cache utilization rate.
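    A minimal sketch of allocation by marginal gain: cache blocks are handed out one at a time to whichever application currently gains the most hits from one more block. The per-application hit curves and the block granularity are illustrative assumptions, not the MG-ULC protocol itself.

```python
# Hedged sketch: greedy second-level cache allocation driven by marginal gain.
def allocate_by_marginal_gain(hit_curves, capacity):
    """hit_curves[i][c]: estimated hits for application i with c cache blocks."""
    alloc = [0] * len(hit_curves)
    for _ in range(capacity):
        gains = [(hit_curves[i][alloc[i] + 1] - hit_curves[i][alloc[i]])
                 if alloc[i] + 1 < len(hit_curves[i]) else 0.0
                 for i in range(len(hit_curves))]            # marginal gain of one more block
        best = max(range(len(hit_curves)), key=lambda i: gains[i])
        if gains[best] <= 0:
            break                                            # no application benefits further
        alloc[best] += 1
    return alloc
```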
  • TANG Jia-xing, CHEN Yao-wu, JIANG Rong-xin
    Computer Engineering. 2013, 39(12): 31-34,48. https://doi.org/10.3969/j.issn.1000-3428.2013.12.007
    The file system storage solutions currently used in network video surveillance systems suffer from low storage efficiency and poor retrieval performance. Aiming at these drawbacks and exploiting the characteristics of surveillance data storage, a dedicated video storage solution for network monitoring video streams is proposed. It provides a logical storage structure and a corresponding data access method built on raw disk devices, including a Group of Pictures(GOP)-based data caching mechanism and a B+ tree for managing video segment index information. System tests show that, compared with a traditional file system, the solution increases video storage efficiency by 43.6% and 30.3% at typical surveillance bitrates of 512 Kb/s and 1 Mb/s respectively, while reducing video retrieval time to 35 ms or less.
  • XIAO Yue-zhen, HUA Bei
    Computer Engineering. 2013, 39(12): 35-39,53. https://doi.org/10.3969/j.issn.1000-3428.2013.12.008
    To address the performance bottleneck caused by memory access on the forwarding path of high-speed software routers, this paper introduces the forwarding frameworks of two software routers, PacketShader and Netmap, analyzes their problems, and presents MapRouter, a zero-copy forwarding framework based on multi-core processors. MapRouter eliminates packet copying with zero-copy techniques, and solves the problem of managing packet buffers among multiple ports with a concurrent lock-free First In First Out(FIFO) queue. By exploiting a series of optimizations, including a highly optimized packet I/O driver, an efficient packet buffer recycling mechanism and an efficient lock-free FIFO implementation, MapRouter achieves 10 Gb/s minimal forwarding(without IP address lookup) throughput on a two-port software router, higher than that of PacketShader and Netmap, while keeping much lower CPU utilization.
  • PENG Jun, LI Fu-hai, LUO Qi-wu, XIAO Xiang-hui
    Computer Engineering. 2013, 39(12): 40-44,59. https://doi.org/10.3969/j.issn.1000-3428.2013.12.009
    Aiming at the high storage bandwidth and capacity requirements of high-speed data acquisition systems, a design of a high-speed multi-channel solid-state storage system based on a Field Programmable Gate Array(FPGA) is proposed. With the FPGA device XC5VLX110T as the core and large-capacity high-speed NAND Flash memory as the storage medium, it improves the data throughput bandwidth at the hardware level by constructing a high-speed multi-channel storage architecture in the FPGA using parallel bus expansion and pipelined buffering. To improve storage parallelism at the software level, a super-page-based address mapping scheme is used to optimize the parallelism of request handling in the Flash Translation Layer(FTL) algorithm. Test results show that the maximum storage speed of the system reaches 73 MB/s, which meets the requirements of high-speed data acquisition systems and shows that the multi-channel storage architecture and the FTL algorithm have excellent parallelism and scalability.
  • TIAN Xin-ji, LI Ya, SONG Cheng
    Computer Engineering. 2013, 39(12): 45-48. https://doi.org/10.3969/j.issn.1000-3428.2013.12.010
    To resolve the contradiction between Multi-input Multi-output(MIMO) with single-polarization antennas and the size limits of mobile terminals, Semi-orthogonal Algebraic Space-time(SAST) coding is applied to a MIMO system with dual-polarized antennas, and its performance is analyzed theoretically. By regarding the permutation matrix and the Cross-polarization Discrimination(XPD) as part of the effective channel, SAST coding with dual-polarized antennas is converted into SAST coding with single-polarization antennas. On this basis, the effect of the permutation matrix and XPD on diversity gain and coding gain is analyzed according to the rank criterion and the determinant criterion, and the decoding method is investigated. Simulation results demonstrate the validity of the theoretical analysis.
  • FU Jian-bin, PENG Hua, DONG Zheng
    Computer Engineering. 2013, 39(12): 49-53. https://doi.org/10.3969/j.issn.1000-3428.2013.12.011
    Since traditional adaptive filtering algorithms cannot effectively estimate a sparse channel directly, a new frequency-domain algorithm is proposed. The Frequency Domain Least Mean Squares(FD-LMS) algorithm weakens the effect of the sparse character of the channel: the Fast Fourier Transform(FFT) turns the sparse channel into a non-sparse one, so that the adaptive filter can estimate the channel directly. Simulation results show that FD-LMS has excellent convergence behavior; its convergence rate is almost equal to that of the frequency-domain RLS algorithm, while its Mean Square Error(MSE) improves by nearly 10 dB. FD-LMS can therefore estimate the sparse channel well, and the computation required for channel estimation can be further reduced with the overlap-save method.
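    A minimal sketch of block frequency-domain LMS channel estimation: input and observation are transformed block by block with the FFT and a normalized complex LMS update is run per frequency bin. The block handling and step size are illustrative assumptions and may differ from the paper's FD-LMS formulation.

```python
# Hedged sketch: per-bin normalized LMS in the frequency domain.
import numpy as np

def fd_lms(x, d, n_fft=64, mu=0.1):
    """x: channel input, d: observed output; returns a time-domain channel estimate."""
    W = np.zeros(n_fft, dtype=complex)
    for b in range(len(x) // n_fft):
        X = np.fft.fft(x[b * n_fft:(b + 1) * n_fft])
        D = np.fft.fft(d[b * n_fft:(b + 1) * n_fft])
        E = D - W * X                                        # per-bin estimation error
        W += mu * np.conj(X) * E / (np.abs(X) ** 2 + 1e-8)   # normalized update
    return np.fft.ifft(W).real
```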
  • YU Lei-lei, CHEN Dong-yan, HUANG Xu, QIN Shao-hua
    Computer Engineering. 2013, 39(12): 54-59. https://doi.org/10.3969/j.issn.1000-3428.2013.12.012
    To handle path breakage in the routing of mobile Wireless Sensor Networks(WSN), a node-disjoint multipath routing algorithm based on the HSV color space is proposed. The algorithm assigns a numeric (h, s, v) tuple to each link in the network and distributes these tuples into the six basic planes of the color space, and then finds multiple node-disjoint paths within different basic color planes. A disjoint multipath route maintenance mechanism is designed that probes the Received Signal Strength Indicator(RSSI) of mobile node links at variable intervals and requires no geographic location information. Experimental results show that when three paths are used for transmission, the data transfer success rate of the proposed algorithm exceeds 80%, while the other classic algorithms compared all remain below 70%. It also performs well in terms of network throughput and energy consumption.

  • XU Hui-bin, XIA Chao
    Computer Engineering. 2013, 39(12): 60-64,69. https://doi.org/10.3969/j.issn.1000-3428.2013.12.013
    In Vehicle Ad hoc Networks(VANET), high dynamic topology and high-speed node mobility cause paths to rupture frequently. A routing algorithm based on stable paths is therefore proposed. Node mobility information is used to predict the Link Expiration Time(LET), and links are discovered among nodes moving in the same direction. The links with the largest LET are chosen to build the path, which makes the path stable and also reduces control overhead. Simulation results show that the proposed routing scheme enhances routing stability and improves throughput.
  • DAI Hui-jun, QU Hua, ZHAO Ji-hong
    Computer Engineering. 2013, 39(12): 65-69. https://doi.org/10.3969/j.issn.1000-3428.2013.12.014
    Quality of Service(QoS) routing is one of the key issues in overlay network research. For the Multiple Constrained Balanced Path(MCBP) problem, a routing algorithm is proposed based on an analysis of multiple QoS constraints. It solves the weight allocation among multiple QoS constraints by introducing the Analytic Hierarchy Process(AHP) and parameter normalization, taking multiple QoS parameters of both nodes and links into account. Meanwhile, it balances the bandwidth and node capacity QoS parameters according to the features of the overlay network. Results show that MCBP outperforms other similar algorithms in balancing network resources while considering all QoS parameters equally.
  • GUO Qing-tao, SUN Qiang-qiang, LI Yong-pan, YU Xiao-jun, ZHENG Tao
    Computer Engineering. 2013, 39(12): 70-74,78. https://doi.org/10.3969/j.issn.1000-3428.2013.12.015
    This paper proposes a new light-weight high-performance server framework(LHP-Svrframe) for new-generation server development, intended for building high-performance servers that handle massive numbers of connections and massive amounts of data. Unlike current frameworks, LHP-Svrframe focuses on the TCP stack and the process model and applies dedicated optimizations, such as load balancing of NIC interrupts, dynamic adjustment of the congestion window size and optimization of the delayed ACK mechanism. Experimental results show that LHP-Svrframe performs four to eight times better than counterparts such as Apache, Lighttpd and ACE.
  • WU Xin-wei, FU Zhong-man, ZHANG Jian-xiong
    Computer Engineering. 2013, 39(12): 75-78. https://doi.org/10.3969/j.issn.1000-3428.2013.12.016
    To address the problem that the link switch time of gigabit Ethernet redundant Network Interface Cards(NIC) is too long to meet application requirements, a hardware detection method is proposed. It is a new gigabit redundant Ethernet link status detection method based on “heartbeat” frames. The method adds a network status detection module, responsible for assembling “heartbeat” frames, by improving the MAC controller, and classifies Ethernets into two types according to network latency: delay networks and no-delay networks. By monitoring the redundant link in real time with “heartbeat” frames and statistically analyzing them in the MAC controller, it can detect a broken link rapidly and shorten the link switch time. Test results indicate that this method rapidly detects a disconnected link, keeps the link switch time within 20 ms, and guarantees the reliability and stability of network communication.
  • CHEN Yong-ling, HU Hong-lin, YANG Xiu-mei
    Computer Engineering. 2013, 39(12): 79-82,86. https://doi.org/10.3969/j.issn.1000-3428.2013.12.017
    A method is proposed to estimate interference and noise power using the synchronization signal in Long Term Evolution(LTE)/Long Term Evolution-Advanced(LTE-A) systems. The channel coefficient of the current subcarrier is estimated, and the received signal of the adjacent subcarrier is estimated under the hypothesis that the channel coefficients of two adjacent subcarriers are nearly equal. The difference between the real received signal and the estimated value is correlated, and a further expectation operation yields the interference and noise power estimate. Simulation results show that the proposed method performs better than the conventional Cyclic Prefix(CP) algorithm in rich multipath fading scenarios and estimates interference and noise power well.
  • MA Long-bang, GUO Ping, ZHAO Juan, LI Jian-lin, YANG Fan
    Computer Engineering. 2013, 39(12): 83-86. https://doi.org/10.3969/j.issn.1000-3428.2013.12.018
    To investigate the impact of cascading failures on service performance in computer networks, a load-capacity cascading failure model is established that takes the initial load, node forwarding rate and routing policy into account. Three evaluation parameters, throughput, load rate and service delay, are used to measure the change in service performance before and after a cascading failure. Simulation results on a BA scale-free network show that the model reflects the degradation of service performance caused by cascading failures, and it provides guidance for the prevention and control of cascading failures.
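    A minimal sketch of a load-capacity cascade on a BA scale-free network, in the spirit of the model above: node load is taken as betweenness, capacity is (1+alpha) times the initial load, and overloaded nodes fail in rounds after the most loaded node is removed. The paper's model additionally considers forwarding rate and routing policy; alpha and the load definition here are illustrative assumptions.

```python
# Hedged sketch: simplified load-capacity cascading failure on a BA network.
import networkx as nx

def cascade(n=200, m=3, alpha=0.2, seed=1):
    g = nx.barabasi_albert_graph(n, m, seed=seed)
    load = nx.betweenness_centrality(g)                  # initial load of each node
    cap = {v: (1 + alpha) * load[v] for v in g}          # capacity = (1+alpha)*load
    g.remove_node(max(load, key=load.get))               # trigger: fail the most loaded node
    while True:
        load = nx.betweenness_centrality(g)              # load after redistribution
        failed = [v for v in g if load[v] > cap[v]]
        if not failed:
            break
        g.remove_nodes_from(failed)                      # overloaded nodes fail in turn
    return g.number_of_nodes()                           # surviving nodes after the cascade
```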
  • ZHANG Xun-yan, XIE Jin-kui, JIN Yi-sheng, YANG Zong-yuan
    Computer Engineering. 2013, 39(12): 87-92,96. https://doi.org/10.3969/j.issn.1000-3428.2013.12.019
    Using routing control or topology control alone in a Wireless Sensor Network(WSN) is energy-inefficient. This paper therefore combines the two and presents a new concept, the virtual node: it gives the distribution of virtual wireless sensors in the detection area, describes topology control based on virtual wireless sensor technology using a minimum-cover approximation algorithm, and proposes a networking technique built on this control. Experimental results show that, when the node number is 1 500 and the covering radius is 80, 85 and 110, the number of transmissions under this method is 15.7, 12.0 and 18.1 times better than under EOLSR, and the algorithm significantly improves energy-saving utility.
  • XU Tong-yang
    Computer Engineering. 2013, 39(12): 93-96. https://doi.org/10.3969/j.issn.1000-3428.2013.12.020
    Existing localization algorithms suffer from low positioning accuracy and limited applicable scenarios in Non-Line-of-Sight(NLOS) environments, so a localization algorithm based on Time of Arrival(TOA) for Wireless Sensor Networks(WSN) is proposed. Using the initial estimate of the unknown node as a reference point, a Taylor series expansion is applied iteratively to compute a second estimate. This second estimate is used to compute approximate distances between the unknown node and the anchor nodes, which are compared with the original TOA distances to identify NLOS errors, and the TOA measurements judged to contain large NLOS errors are removed. The Taylor method is then used on the remaining, corrected TOA measurements to obtain the precise position of the unknown node. Simulation results show that the proposed algorithm effectively restrains NLOS error and achieves better localization accuracy than traditional localization algorithms.
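    A minimal sketch of the Taylor-series (Gauss-Newton) position refinement that the method above iterates and combines with NLOS measurement rejection; the convergence settings are illustrative assumptions.

```python
# Hedged sketch: Taylor-series refinement of a TOA position estimate.
import numpy as np

def taylor_toa(anchors, ranges, x0, iters=10):
    """anchors: (k,2) positions, ranges: (k,) measured distances, x0: initial guess."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)               # predicted distances
        J = (x - anchors) / d[:, None]                        # Jacobian of d w.r.t. position
        dx, *_ = np.linalg.lstsq(J, ranges - d, rcond=None)   # least-squares correction
        x += dx
        if np.linalg.norm(dx) < 1e-6:
            break
    return x

# NLOS rejection idea: drop the anchors with the largest |measured - predicted|
# residuals, then re-run taylor_toa on the remaining measurements.
```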
  • HUANG Zhan-hua, LIAO Ke, CAI Huai-yu
    Computer Engineering. 2013, 39(12): 97-101. https://doi.org/10.3969/j.issn.1000-3428.2013.12.021
    A hardware-based high-precision time synchronization method for Wireless Sensor Networks(WSN) built on ZigBee technology is proposed. Following a cross-layer design, the Received Signal Strength Indicator(RSSI) signal is extracted to act as the synchronization trigger signal. A timing module that meets the real-time demands of the system is implemented with a Complex Programmable Logic Device(CPLD) hardware circuit, and the synchronization function is completed by a software synchronization algorithm. This resolves the contradiction among low power, low cost and high accuracy in traditional time synchronization for wireless sensor networks. Theoretical analysis and experiments on system precision show that the time accuracy between nodes reaches the 10 μs range, which meets most needs of WSNs.
  • LIN Pei, YANG Yi, CHEN Yi-piao, DENG Yu-bo
    Computer Engineering. 2013, 39(12): 102-106. https://doi.org/10.3969/j.issn.1000-3428.2013.12.022
    The 2D mesh adaptive fault-tolerant routing algorithm based on cracky fault blocks is an effective fault-tolerant algorithm. It not only solves the livelock problem but also overcomes the drawback that good nodes inside a faulty block cannot take part in normal routing. However, the algorithm still has an obvious disadvantage: its routing path is not the shortest, because every interior spanning tree must be traversed while passing through a cracky fault block. This paper proposes an adaptive fault-tolerant routing-table algorithm aimed at the shortest routing length. The table is created in the memory of the tree nodes and preserves the information needed to decide whether to traverse the interior tree. Experimental results show that the proposed algorithm reduces the average routing path length by 70% as the mesh grows, is easy to implement, and significantly prolongs network lifetime.
  • TAN Jin-yong, YANG Zhong-liang
    Computer Engineering. 2013, 39(12): 107-110,117. https://doi.org/10.3969/j.issn.1000-3428.2013.12.023
    Many single-channel and multi-channel Medium Access Control(MAC) protocols cannot satisfy the data rate and delay requirements of large-scale data collection applications. To resolve this problem, a new multi-channel fast data collection MAC protocol is proposed. It uses multiple channels and Time Division Multiple Access(TDMA) to eliminate interference between nodes, considers the half-duplex communication mode and the fairness of data collection during time-slot assignment, and increases parallel transmissions by maximizing spatial channel reuse. In the timing scheduling process, a delay-energy balance factor is introduced to enhance the flexibility of the protocol, so that it can fit different applications by adjusting the balance between delay and energy consumption. Simulation results show that the protocol offers high throughput and low delay for large-scale data collection applications.
  • GUO Hao, DONG Xiao-lei, CAO Zhen-fu
    Computer Engineering. 2013, 39(12): 111-117. https://doi.org/10.3969/j.issn.1000-3428.2013.12.024
    Most identity-based schemes rely on bilinear pairings, whose high computational complexity seriously reduces the efficiency of the resulting cryptographic schemes. Aiming at this problem, this paper proposes a ring signature scheme without pairings. By introducing a new technique for computing the 3l-th root of a cubic residue in the Eisenstein ring, which is also used to compute the ring signature keys, a new identity-based ring signature scheme based on cubic residues is constructed. The scheme is formally proved to be secure against chosen message and identity attacks in the random oracle model, assuming the hardness of factoring, and it is also proved to provide unconditional signer anonymity.
  • GUO Lei, ZHENG Hao-ran, LIU Ming-wei
    Computer Engineering. 2013, 39(12): 118-121. https://doi.org/10.3969/j.issn.1000-3428.2013.12.025
    Invertible 0-1 matrices with the largest branch number are widely used in the design of diffusion layers of block ciphers. To construct such 16×16 matrices, this paper partitions a 16×16 matrix into a 4×4 block matrix whose units are 4×4 0-1 matrices. Using the weight distribution of sums of 4-dimensional 0-1 vectors of weight 2 over a field of characteristic 2, it constructs groups of 4×4 0-1 matrix units with special structures, up to permutation isomorphism. Based on the structural characteristics of Hadamard matrices, it then presents block-construction methods for 16×16 invertible 0-1 matrices with the maximum branch number 8, and further gives methods for constructing 16×16 involutory 0-1 matrices with maximum branch number 8 together with their number up to permutation isomorphism.
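    A minimal sketch of how the branch number of a candidate binary matrix can be checked by brute force over GF(2), i.e. the minimum of wt(x)+wt(Mx) over all nonzero inputs; for a 16×16 matrix this is 2^16-1 cases. The construction itself is not reproduced here.

```python
# Hedged sketch: exhaustive branch-number check for a 0-1 matrix over GF(2).
import numpy as np
from itertools import product

def branch_number(M):
    M = np.asarray(M) % 2
    n = M.shape[1]
    best = 2 * n
    for bits in product((0, 1), repeat=n):
        x = np.array(bits)
        if not x.any():
            continue                              # skip the zero vector
        y = M.dot(x) % 2                          # matrix-vector product over GF(2)
        best = min(best, int(x.sum() + y.sum()))  # wt(x) + wt(Mx)
    return best
```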
  • XIE Dan, YANG Bo, SHAO Zhi-yi, XU Yan-jiao, DU Jun-qiang
    Computer Engineering. 2013, 39(12): 122-125. https://doi.org/10.3969/j.issn.1000-3428.2013.12.026
    Equality testing is an important part of Secure Multi-party Computation(SMC), with applications in data mining, recommendation services, online dating services and medical databases. To overcome the defects of existing protocols for securely comparing two values, which are only secure in the semi-honest model, this paper proposes a secure two-party equality test protocol in the malicious model. The protocol uses a public-key encryption scheme based on the lattice Learning With Errors(LWE) problem together with the Paillier encryption scheme, so it can withstand attacks when a malicious adversary is present, and the protocol is proved secure in the malicious model. Analysis shows that after the protocol completes, neither party's private information is revealed. Compared with protocols that are only secure in the semi-honest model, the proposed protocol can effectively resist malicious adversaries and provides a good solution for communications with high security requirements.
  • XIE Rong-sheng, ZHAO Huan-xi, WU Ke-shou
    Computer Engineering. 2013, 39(12): 126-129. https://doi.org/10.3969/j.issn.1000-3428.2013.12.027
    The Quick Response(QR) barcode anti-counterfeiting technique based on digital watermarking provides low watermarking capacity and poor anti-counterfeiting performance. To solve this problem, a random graying method and a background-image-graying method are proposed for QR binary images to increase watermarking capacity. The graying extent of the QR binary image is determined by a graying threshold that can be selected according to the specific application. Based on these graying methods, an anti-counterfeiting QR 2D barcode watermarking scheme based on the Discrete Wavelet Transform(DWT) is designed, in which watermark embedding and detection are performed with a quantization function, and the watermarking positions are determined by a 2D chaotic sequence generated from carefully designed chaos keys. Experimental results show that the proposed watermarking scheme improves the anti-counterfeiting performance of QR 2D barcodes without losing any barcode information.
  • XU Liang, TAN Huang
    Computer Engineering. 2013, 39(12): 130-135. https://doi.org/10.3969/j.issn.1000-3428.2013.12.028
    According to the "Information Security Technology: Security Techniques Requirement for Operating System" standard, a formal security policy model is required when developing security operating systems at the access verification protection level. For this situation, an improved BLP model with multi-level security labels and secure transition rules is proposed, based on the traditional data-confidentiality BLP model, to satisfy the development of actual systems. The states, invariants and transition rules are described formally, and the theorem prover Isabelle is used to prove that all rules preserve the invariants of the model, which guarantees the model's reliability and realizes automatic formal verification of the model's correctness.
  • ZHAO Qing-song, XU Huan-liang
    Computer Engineering. 2013, 39(12): 136-140. https://doi.org/10.3969/j.issn.1000-3428.2013.12.029
    Garbled circuits can protect the privacy of the user's input and of the circuit in delegated computation. However, when a garbled circuit is reused, a malicious worker can cheat by replying with the output labels obtained in a previous computation, which compromises the security of the computation. A delegated computation scheme based on re-randomizable garbled circuits is proposed to solve this non-reusability problem. Exploiting the additive homomorphism of the BHHO scheme, which maps 0-1 vectors to 0-1 vectors of the same length through two known affine transformations over Zp, random bit permutations are applied to each wire of the garbled circuit, and the wire labels and the four ciphertext pairs of each gate are re-randomized. Theoretical analysis shows that the scheme effectively resolves the security problem of reusing garbled circuits, and the delegated computation provides input and output privacy for the client as well as verifiability of the results.
  • ZHANG Chun-sheng, SU Ben-yue, YAO Shao-wen
    Computer Engineering. 2013, 39(12): 141-143,147. https://doi.org/10.3969/j.issn.1000-3428.2013.12.030
    Previous distributed ring signature schemes need bilinear pairing or exponentiation operations, so their computational efficiency is not high. To improve efficiency, a new certificateless distributed ring signature scheme without bilinear pairings or exponentiations is proposed; it only needs modular multiplications on elliptic curves. Complexity analysis shows that the proposed scheme is efficient, requiring only 2s+3t-2 modular multiplications (t is the number of subsets in the access structure, s is the number of members of the actual signing subset). In addition, the scheme becomes a certificateless threshold ring signature scheme when the number of members of every subset of the access structure is set to a fixed threshold value.
  • LI Jing, LI Zhi-hui, WU Xing-xing
    Computer Engineering. 2013, 39(12): 144-147. https://doi.org/10.3969/j.issn.1000-3428.2013.12.031
    To realize the verifiability and dynamic properties required in practice for multi-secret sharing schemes with general access structures, this paper presents a dynamic multi-secret sharing scheme for arbitrary access structures, in which each participant selects his own secret share and sends it to the dealer without a secure channel, based on the RSA cryptosystem. Meanwhile, a pseudo secret share is computed for each participant from a two-variable one-way function, and the secret distribution and reconstruction algorithms are designed. Analysis shows that in the reconstruction phase each participant only presents his pseudo secret share, so the secret is recovered without revealing the real share; the scheme is therefore fraud-resistant, and distributing the shares over a public channel reduces the cost of the scheme.
  • WAN Bao-ji, ZHANG Tao, HOU Xiao-dan, ZHU Zhen-hao
    Computer Engineering. 2013, 39(12): 148-151,156. https://doi.org/10.3969/j.issn.1000-3428.2013.12.032
    Existing detection algorithms find it difficult to achieve high detection accuracy when the embedding algorithm of the stego-images is unknown. This paper therefore proposes a steganography-unknown image steganalysis method based on Boosting fusion. In the training phase it builds classifier models for several steganographic algorithms to obtain multiple classification results, and the performance of these classifiers is weighted according to the Boosting algorithm; the final detection result is obtained by a combination rule based on probability outputs. The detector is applied to current spatial-domain and JPEG steganographic algorithms. Extensive experimental results show that the method is effective against multiple steganographic algorithms; Boosting takes advantage of the individual strengths of each detector, and the overall detection performance increases by about 2%.
  • ZHU Ling-zhi, ZHAO Jin-guo, LIANG Jun-bin, LIU Zhi-xiong
    Computer Engineering. 2013, 39(12): 152-156. https://doi.org/10.3969/j.issn.1000-3428.2013.12.033
    Traditional false data filtering schemes cannot filter false data injection attacks launched from non-forwarding areas. This paper therefore proposes a false data filtering scheme based on a threshold mechanism. Each node establishes a path to the Sink, and each data packet carries t Message Authentication Codes(MAC) from detecting nodes together with two security threshold parameters. Each forwarding node not only checks the correctness of the MACs but also validates the security threshold parameters. Theoretical analysis and simulation results demonstrate that the scheme resists false data injection attacks from arbitrary areas and consumes less energy than existing schemes.
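    A minimal sketch of the en-route check a forwarding node could perform: a report must carry at least t endorsing MACs, and every MAC whose key the forwarder shares is recomputed and compared. Key distribution and the two threshold parameters of the scheme are illustrative assumptions, not the paper's exact protocol.

```python
# Hedged sketch: threshold-based en-route MAC verification.
import hmac, hashlib

def endorse(event: bytes, node_id: str, key: bytes):
    """A detecting node attaches its MAC over the event report."""
    return node_id, hmac.new(key, event, hashlib.sha256).digest()

def forward_ok(event: bytes, macs, shared_keys, t=3):
    """macs: list of (node_id, tag); shared_keys: keys this forwarder holds."""
    if len(macs) < t:                               # threshold check
        return False
    for node_id, tag in macs:
        if node_id in shared_keys:                  # verify every MAC we can check
            expect = hmac.new(shared_keys[node_id], event, hashlib.sha256).digest()
            if not hmac.compare_digest(tag, expect):
                return False                        # forged report is dropped en route
    return True
```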
  • ZHANG Jian, FAN Hong-bo, HUANG Qing-song, LIU Li-jun
    Computer Engineering. 2013, 39(12): 157-161. https://doi.org/10.3969/j.issn.1000-3428.2013.12.034
    Online exact single-pattern string matching is widely used in almost all fields involving text and symbol processing, and SBNDMq is one of the fastest algorithms in this field. By introducing unaligned 2-byte reads, SBNDMq is improved and a series of algorithms named SBNDMq_Shortb is presented. The jump capability of the new algorithms is the same as that of SBNDMq, but memory accesses are reduced by 50%, so the algorithms are faster than SBNDMq. Experimental results show that SBNDMq_Shortb outperforms other well-known algorithms under many matching conditions on the test platform.
  • TANG Jie, WEN Zhong-hua, HUANG Hai-ping, WU Zheng-cheng
    Computer Engineering. 2013, 39(12): 162-166. https://doi.org/10.3969/j.issn.1000-3428.2013.12.035
    In nondeterministic planning it is feasible to distinguish some states through observation information, but there is so much observation information that selecting the useful part of it becomes important. Previous algorithms use direct search with added pruning conditions to reach the optimization objective, but such methods have limitations. For observation information reduction, this paper designs a new algorithm that improves search efficiency. It abstracts the problem into a cover problem on a 0-1 matrix, represents the matrix with an orthogonal-list data structure, and obtains a minimal set of observation variables by maintaining the orthogonal list and using a heuristic function. Experimental results show that the algorithm not only finds a minimal set of observation variables but also runs faster than similar algorithms.
  • ZHAO Cheng, WU Xi-sheng
    Computer Engineering. 2013, 39(12): 167-170. https://doi.org/10.3969/j.issn.1000-3428.2013.12.036
    For the problem of detecting corners at L-junctions in an image and computing the corner angle at the L-junction, this paper proposes a new algorithm based on an improved Harris detector. The algorithm sums the weights on each edge within the derivative window, takes the average of the derivatives as the representative derivative, measures the principal curvature at the detected corners, and identifies L-junction corners through a ratio influence function. From the response function values computed in these two steps, it then calculates the corner angle at the L-junction. Experimental results show that the algorithm reduces the corner recognition error rate from 24.6% to 3.3% and the angle calculation error rate from 10.21% to 3.82%.
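    A minimal sketch of the standard Harris detection step that the L-junction analysis builds on; the ratio influence function and the angle computation are not reproduced, and the window and threshold parameters are illustrative assumptions.

```python
# Hedged sketch: baseline Harris corner detection with OpenCV.
import cv2
import numpy as np

def harris_corners(path, block=2, ksize=3, k=0.04, thresh=0.01):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(np.float32)
    response = cv2.cornerHarris(gray, block, ksize, k)      # Harris response map
    ys, xs = np.where(response > thresh * response.max())   # keep strong responses
    return list(zip(xs.tolist(), ys.tolist()))              # corner coordinates (x, y)
```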
  • XING Hong-yan, ZHANG Qian-sheng, ZHANG Chun-gui
    Computer Engineering. 2013, 39(12): 171-175,180. https://doi.org/10.3969/j.issn.1000-3428.2013.12.037
    Uncertainty measurement is an important issue in artificial intelligence, and many measuring methods have been put forward and widely applied. This paper first reviews several existing definitions of similarity measures between Vague values(sets) and analyzes their properties, and then presents an effective matching definition of the similarity measure between Vague values(sets). It further proposes a formula for the entropy of Vague values(sets) together with an axiomatic treatment of fuzzy entropy. All the measures between Vague values(sets) are validated with experimental data, and simulation results of the improved method show its practical effect on category recognition of Chinese drawings.
  • ZHI Chen-jiao, TANG Hui-ming
    Computer Engineering. 2013, 39(12): 176-180. https://doi.org/10.3969/j.issn.1000-3428.2013.12.038
    To improve the real-time performance of traditional feature-matching vehicle speed detection, this paper presents an improved Harris corner matching method. Based on video, it uses moving object detection and Harris corner detection, combines motion estimation with NCC template matching, and optimizes the matching-area search strategy for coarse corner matching; it then uses random sample consensus for fine corner matching, applies single-view metrology coordinate transformation, and finally measures the vehicle speed. Experimental results show that, compared with the traditional method, coarse corner matching speed is increased by 400% and fine corner matching speed by 200%, the speed measurement accuracy exceeds 90%, and the method improves both the real-time performance of the algorithm and the accuracy of corner matching.
  • HE Hong-zhou, ZHOU Ming-tian
    Computer Engineering. 2013, 39(12): 181-185,190. https://doi.org/10.3969/j.issn.1000-3428.2013.12.039
    The existing Affinity Propagation(AP) clustering algorithm cannot reflect the clustering structure of complex protein sequences. This paper proposes an adaptive AP classification method based on generalized Substitution Matching Similarity and Huffman decision(adAP/GSHD). Protein sequences are clustered via the generalized Substitution Matching Similarity(gSMS) and the existing adaptive affinity propagation(adAP) algorithm, and Huffman coding is used to constrain the average code length of the clustering results so as to embody the family clustering structure of protein sequences. Experiments compare adAP/GSHD with four classic clustering methods on six datasets from the Clusters of Orthologous Groups(COG) of proteins database and the Structural Classification of Proteins(SCOP) database. The results demonstrate that the method obtains a number of clusters closer to the correct family number and a more compact clustering structure for a given set of proteins, and that its average F-measure is 19.67%, 8.7%, 9.5% and 43.81% higher than those of adAP, SMS, spectral clustering and TribeMCL respectively.
  • LI Zhi-qiang, LIN Xiang-hong
    Computer Engineering. 2013, 39(12): 186-190. https://doi.org/10.3969/j.issn.1000-3428.2013.12.040
    To address the uneven distribution of converged populations and the poor global search performance of the Non-dominated Sorting Genetic Algorithm II(NSGA-II), a multi-objective evolutionary algorithm called the K-means clustering non-dominated sorting genetic algorithm II(KMCNSGAII) is proposed by combining clustering theory with the existing algorithm. KMCNSGAII uses K-means clustering to cluster both the objective functions and the individuals, and then applies a learning and improvement step to the clustered individuals. The algorithm is applied to several classical unconstrained and constrained test functions. Experimental results demonstrate that KMCNSGAII achieves good convergence and diversity indicator values, and that both the convergence and the diversity of the population are improved significantly compared with NSGA-II.
  • WANG Yun, ZHU Jia-gang, LU Xiao, HUANG Ke-wang
    Computer Engineering. 2013, 39(12): 191-195,199. https://doi.org/10.3969/j.issn.1000-3428.2013.12.041
    Using the Factored Principal Components Analysis(FPCA) feature extraction algorithm on high-resolution images has poor real-time performance and may cause the dimension disaster, because FPCA must be implemented with an iterative algorithm. To solve this problem, this paper develops a new image feature extraction method called Modular FPCA(M-FPCA). The method divides the original image samples into modules, applies FPCA to every sub-image matrix, and obtains the feature matrix of the original image by merging the sub-image features. Color images can be represented by their R, G and B components; to overcome the shortcomings of existing color information fusion methods, M-FPCA is combined with an improved color information fusion method, named color M-FPCA. Experimental results on the CVL and FEI color face image databases show that M-FPCA improves the real-time performance of FPCA and avoids the dimension disaster, while color M-FPCA effectively extracts color information from color face images and achieves a higher recognition rate.
  • ZHANG Wei, HUANG Wei, XIA Li-min, LUO Da-yong
    Computer Engineering. 2013, 39(12): 196-199. https://doi.org/10.3969/j.issn.1000-3428.2013.12.042
    To improve the accuracy of pain expression recognition, a method based on Supervised Locality Preserving Projections(SLPP) and Multiple Kernel Linear Mixture Support Vector Machines(MKLMSVM) is proposed. SLPP, which uses prior class label information, is adopted to extract pain expression features, solving the problem that LPP ignores the within-class local structure by not using prior class labels; MKLMSVM is then employed to recognize the pain expression. Experimental results demonstrate that the accuracy of the proposed approach reaches 88.56%, significantly better than Active Appearance Models(AAM), and that compared with a normal Support Vector Machine(SVM) it improves the interpretability of the decision function and the classifier performance.
  • XIONG Zhong-yang, LIN Xian-qiang, ZHANG Yu-fang, YA Man
    Computer Engineering. 2013, 39(12): 200-203,210. https://doi.org/10.3969/j.issn.1000-3428.2013.12.043
    A Web page contains both relevant and irrelevant information, and the irrelevant information negatively affects classification, storage and retrieval. To reduce this influence, this paper proposes a new method for extracting the content of theme-related Web pages based on their text and structural features. It removes unrelated tags from the Web page with regular expressions, segments the page into blocks according to the page structure and text information, and, by analyzing the text blocks and link blocks, retains only the main content of the page while deleting the noisy parts. Experimental results show that the method is feasible and achieves high accuracy in page cleaning and content extraction.
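    A minimal sketch of the two stages described above: regular expressions strip noisy tags, and coarse blocks are kept or dropped by the share of non-link text they contain. The tag list, block delimiter and density threshold are illustrative assumptions.

```python
# Hedged sketch: regex-based tag cleaning plus a crude text-density block filter.
import re

NOISE_TAGS = re.compile(r"<(script|style|iframe)[^>]*>.*?</\1>", re.S | re.I)
ALL_TAGS = re.compile(r"<[^>]+>")
LINKS = re.compile(r"<a[^>]*>.*?</a>", re.S | re.I)

def extract_main_text(html, min_density=0.5, block_tag="div"):
    html = NOISE_TAGS.sub("", html)                         # drop scripts, styles, iframes
    blocks = re.split(r"</?%s[^>]*>" % block_tag, html)     # coarse structural blocks
    kept = []
    for b in blocks:
        text = ALL_TAGS.sub("", b).strip()
        if not text:
            continue
        link_text = ALL_TAGS.sub("", " ".join(LINKS.findall(b)))
        density = 1.0 - len(link_text) / max(len(text), 1)  # share of non-link text
        if density >= min_density:                          # link-heavy blocks are noise
            kept.append(text)
    return "\n".join(kept)
```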
  • XU Qin, LUO Bin
    Computer Engineering. 2013, 39(12): 204-210. https://doi.org/10.3969/j.issn.1000-3428.2013.12.044
    Given an inappropriate set of initial cluster centroids, the K-means algorithm can get trapped in a local minimum. To remedy this, this paper proposes a K-means clustering algorithm combined with adaptive mean-shift and a Minimum Spanning Tree(MST). The original data set is projected into a Principal Component Analysis(PCA) subspace, an adaptive mean-shift is run in that subspace to move the data toward dense regions, and the number of clusters and the cluster indicators are found via the MST and a graph connected-component algorithm. According to the indicators, density peaks are computed in the full space and taken as the initial centroids for K-means clustering. Experimental results show that the proposed algorithm provides a better global solution and higher clustering accuracy within a shorter execution time.
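    A minimal sketch of the initialization pipeline: PCA projection, a few mean-shift iterations that pull points toward dense regions, an MST whose unusually long edges are cut, and the resulting connected components used to seed K-means. The bandwidth heuristic and edge-cut rule are illustrative assumptions, not the paper's adaptive scheme.

```python
# Hedged sketch: mean-shift + MST seeding for K-means.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def seeded_kmeans(X, n_components=2, iters=5, cut_factor=3.0):
    Z = PCA(n_components=n_components).fit_transform(X)     # project to PCA subspace
    bw = 0.5 * np.median(pdist(Z))                          # crude bandwidth heuristic
    P = Z.copy()
    for _ in range(iters):                                  # Gaussian mean-shift steps
        for i in range(len(P)):
            w = np.exp(-np.sum((Z - P[i]) ** 2, axis=1) / (2 * bw ** 2))
            P[i] = w @ Z / w.sum()
    mst = minimum_spanning_tree(squareform(pdist(P))).toarray()
    mst[mst > cut_factor * np.median(mst[mst > 0])] = 0     # cut long inter-cluster edges
    k, labels = connected_components(mst, directed=False)
    seeds = np.vstack([X[labels == i].mean(axis=0) for i in range(k)])  # full-space peaks
    return KMeans(n_clusters=k, init=seeds, n_init=1).fit(X)
```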
  • SUN Jin-guang, MA Zhi-fang, MENG Xiang-fu
    Computer Engineering. 2013, 39(12): 211-215. https://doi.org/10.3969/j.issn.1000-3428.2013.00.045
    In the era of big data, obtaining valid information from the Web has become a hot topic for business, government and researchers, and opinion mining has become a research focus of Natural Language Processing(NLP) and text mining. However, because of the inherent fuzziness and randomness of language, and because traditional term weighting methods are not suitable for sentiment words, the accuracy of text sentiment classification is difficult to raise to the level of traditional topic classification. To solve these problems, this paper proposes a sentiment classification method based on sentiment word attributes and the cloud model. It calculates the weights of sentiment words by combining their attributes with syntactic structure, and performs the qualitative-quantitative conversion of sentiment words with the cloud model. Experimental results show that the weighting method is valid, with a recall rate of up to 78.8%; the sentiment classification results are more accurate than those of the dictionary-based method, with a correct classification rate of up to 68.4%, an improvement of about 9%.
  • WANG Hao, CAO Jian
    Computer Engineering. 2013, 39(12): 216-222. https://doi.org/10.3969/j.issn.1000-3428.2013.12.046
    In distributed environments, existing Agent coalition formation algorithms cannot handle task flows with logical interdependencies and transfer costs. To solve this problem, Agent negotiation is used to build coalitions. During the negotiation two roles are defined, the publisher Agent and the participant Agent, with a corresponding algorithm for each: one focuses on adjusting cost information and the other on competing for high-profit tasks. A novel feature is that a participant Agent is allowed to disclose part of its private cost information to the publisher Agent, in a controlled way, in order to compete for profitable tasks, and the optimal coalition structure is formed after several rounds of negotiation. Experimental results show that, under a labor-based distribution of the coalition's total revenue, the information disclosure mechanism forms coalitions faster and increases the coalition net profit and the Agents' average profit rate compared with the traditional non-disclosure mechanism.
  • PENG Yan-fei, SHANG Yong-gang
    Computer Engineering. 2013, 39(12): 223-227,232. https://doi.org/10.3969/j.issn.1000-3428.2013.12.047
    In content-based image retrieval, the classification performance of a Support Vector Machine(SVM) is affected by sample imbalance, and the visual diversity of images means that positive samples may not be found near the classification hyperplane, so the classification performance cannot be improved. This paper proposes a two-stage SVM hyperplane offset method. To cope with the sample imbalance, the hyperplane is first moved toward the theoretically optimal hyperplane, and relevance feedback is performed with respect to this hyperplane; according to the feedback results, three offset principles are then used to shift the current hyperplane, which alleviates the visual diversity problem and yields a better classification hyperplane with better retrieval precision. Experimental results show that, compared with the standard SVM image retrieval method, the proposed method greatly improves classification performance and raises retrieval accuracy by 16% on average.
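    A minimal sketch of one way to realize a hyperplane offset after training: an offset is added to the SVM decision function so that the boundary moves toward the majority (irrelevant) side, enlarging the region judged relevant. The offset rule is an illustrative assumption, and the paper's second, feedback-driven offset stage is not reproduced.

```python
# Hedged sketch: SVM with a post-training decision-boundary offset.
import numpy as np
from sklearn.svm import SVC

class OffsetSVM:
    def __init__(self, shift=0.5, **kw):
        self.clf = SVC(kernel="rbf", **kw)
        self.shift = shift
        self.offset = 0.0

    def fit(self, X, y):                       # y: 1 = relevant, 0 = irrelevant
        self.clf.fit(X, y)
        neg = self.clf.decision_function(X)[y != 1]
        # move the boundary toward the majority (negative) side of the margin
        self.offset = self.shift * (np.median(neg) if len(neg) else 0.0)
        return self

    def predict(self, X):
        return (self.clf.decision_function(X) - self.offset > 0).astype(int)
```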
  • MA Yue, WANG Xiao-tong, XU Xiao-gang
    Computer Engineering. 2013, 39(12): 228-232. https://doi.org/10.3969/j.issn.1000-3428.2013.12.048
    Most current scene matching algorithms are time-consuming and have low matching rates. To solve these problems, a new scene matching algorithm based on an improved Local Binary Pattern(LBP) descriptor and pseudo-Zernike moments is proposed. Pseudo-Zernike moments are used to extract direction and scale information, texture information is extracted with the improved LBP descriptor, principal component analysis is applied to the scale and direction information, and the principal information is binarized. Experiments show that the computational complexity of the algorithm is lower than that of two typical algorithms, with a CPU time of 0.05 s, satisfying real-time requirements. Matching results under various conditions are also provided and compared: the matching rate is 100% under standard conditions, 64.52% under rotation variations and 53.84% under brightness variations; the algorithm has the highest matching rate under these variations, while remaining basically level with the other algorithms under standard conditions.
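    A minimal sketch of the basic 8-neighbour LBP code and histogram that the improved descriptor starts from; the pseudo-Zernike, PCA and binarization steps are not reproduced.

```python
# Hedged sketch: plain LBP histogram descriptor.
import numpy as np

def lbp_histogram(gray):
    """gray: 2-D uint8 array; returns a 256-bin normalized LBP histogram."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]                                    # center pixels
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit        # one bit per neighbour comparison
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```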
  • CHEN Xiao-jing, JING Zhong-liang, ZHANG Jun
    Computer Engineering. 2013, 39(12): 233-236,241. https://doi.org/10.3969/j.issn.1000-3428.2013.12.049
    In the early development of airborne meteorological radar, numerical weather modeling is a very important part, and the accuracy of the wind field directly affects the accuracy of the entire numerical weather model. To analyze the motion between successive images in a numerical weather model, a low-dimensional fluid motion estimation method based on the Helmholtz decomposition of the motion field is used. Through the deformation of a small number of vortex and source particles, a low-dimensional parametric expression of the optical flow field is obtained; the flow field consists of linear combinations of irrotational and solenoidal basis functions based on Green kernel gradients, and the coefficient values and basis function parameters are obtained by minimizing a cost function. Experimental results show that, compared with the traditional optical flow method, this method is nearly 4 times faster and the wind field reflects actual weather conditions more accurately, so the method is more reliable for wind field reconstruction.
  • WEI Li-fang, LIN Jia-xiang
    Computer Engineering. 2013, 39(12): 237-241. https://doi.org/10.3969/j.issn.1000-3428.2013.12.050
    An improved method based on Speeded-up Robust Features(SURF) is proposed to ensure accuracy and reduce the time cost of fundus image registration. The method extracts SURF features, matches feature points with the BBF algorithm, and uses the keypoints' orientations and the geometric scale of the matches to exclude incorrect matches. A hierarchical estimator combined with model selection is used to compute the transformation parameters, and a registration correction step yields better transformation parameters. Experimental results show that the mean square error of the registration is less than 1, which meets the accuracy requirements while improving processing speed.
  • ZHAO Kun, JI Qi-chun, LI Ling-yan
    Computer Engineering. 2013, 39(12): 242-246,254. https://doi.org/10.3969/j.issn.1000-3428.2013.12.051
    For a robot solving a maze in an unknown environment, a path planning algorithm based on a dynamic discrete potential field is proposed. To improve path optimization, the algorithm models the environment with a grid method that includes boundary nodes and builds the potential field by accumulating the values of connected nodes, which represent the obstacle status of the grid cells. To increase the search rate, the algorithm dynamically updates the potential field as environment information is updated and follows the direction in which the potential values fall to guide the robot toward the target. The convergence of a path is judged from its visit status to avoid expanding useless grid cells. Simulation results show that, with this algorithm, the robot can quickly and efficiently find a smooth optimal path in complex and unknown maze environments.
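    A minimal sketch of a static grid potential planner in the same spirit: a breadth-first wavefront from the goal assigns each free cell a potential, and the robot repeatedly steps to the neighbour with the lowest potential. The dynamic re-planning and boundary-node handling of the paper are not reproduced.

```python
# Hedged sketch: wavefront potential field and downhill path following.
from collections import deque

def build_potential(grid, goal):
    """grid: 2-D list, 0 = free, 1 = obstacle; returns {cell: potential}."""
    pot, q = {goal: 0}, deque([goal])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in pot):
                pot[(nr, nc)] = pot[(r, c)] + 1          # distance-to-goal potential
                q.append((nr, nc))
    return pot

def follow(pot, start):
    path, cur = [start], start
    while pot.get(cur, 0) > 0:
        r, c = cur
        cur = min(((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)),
                  key=lambda n: pot.get(n, float("inf")))  # step downhill in potential
        path.append(cur)
    return path
```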
  • XING Chang-zheng, HU Quan-bo
    Computer Engineering. 2013, 39(12): 247-250,259. https://doi.org/10.3969/j.issn.1000-3428.2013.12.052
    The data stream clustering algorithm TDCA suffers from low clustering speed and poor memory utilization when the data distribution is skewed, and a variable-rate data stream environment seriously degrades the quality of its clustering results. To deal with these problems, a data stream clustering algorithm named GR-Stream is presented. It uses grid cells to aggregate data points, organizes the grid cells with an index structure extended from the R-tree, introduces a pruning strategy on this structure, and adjusts the way data points are inserted into the tree. The algorithm is tested on the real KDD-CUP99 dataset. Experimental results show that, compared with the data organization of the TDCA algorithm, this index structure improves clustering speed by 40%, the pruning strategy saves at least half of the memory usage, and the average purity of the clustering results remains above 90% in a variable-rate data stream environment.
  • ZHANG Li-cai, YU Tian-tian, WANG Min, YE De-kun
    Computer Engineering. 2013, 39(12): 251-254. https://doi.org/10.3969/j.issn.1000-3428.2013.12.053
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    In order to achieve lighting energy saving in highway tunnels, a method of speed and position estimation based on the cloud model is proposed. It combines the forward cloud generator with the backward cloud algorithm, and processes the pulse waveform data produced when a vehicle passes over the detection coil to obtain an estimate of the average speed. It keeps the speed estimation results and the corresponding time as the decision basis for the vehicle's position, and adopts cloud reasoning to estimate the vehicle's position. The cloud estimation error is corrected in real time, so that the position of the vehicle can be judged accurately. Simulation and experimental results show that the algorithm is able to estimate a vehicle's position inside a highway tunnel with an accuracy of 99.230 9%, improving the efficiency of the cloud model algorithm in practical applications and making it suitable for energy-saving control of lighting systems in highway tunnels.
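    For reference, the standard forward and backward normal cloud generators that such a method builds on can be sketched in Python as follows; the expectation, entropy and hyper-entropy values below are made-up examples, not the paper's tunnel data.

      import numpy as np

      def forward_cloud(Ex, En, He, n, rng=None):
          # Generate n cloud drops (x, membership) from (Ex, En, He).
          rng = rng or np.random.default_rng()
          En_prime = rng.normal(En, He, n)          # per-drop entropy
          x = rng.normal(Ex, np.abs(En_prime))      # drop positions
          mu = np.exp(-(x - Ex) ** 2 / (2 * En_prime ** 2))
          return x, mu

      def backward_cloud(x):
          # Estimate (Ex, En, He) from observed drops x (no memberships needed).
          Ex = x.mean()
          En = np.sqrt(np.pi / 2) * np.abs(x - Ex).mean()
          He = np.sqrt(max(x.var(ddof=1) - En ** 2, 0.0))
          return Ex, En, He

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          # Hypothetical speed samples (km/h) recovered from coil pulse widths.
          drops, _ = forward_cloud(Ex=80.0, En=5.0, He=0.5, n=2000, rng=rng)
          print("estimated (Ex, En, He):", np.round(backward_cloud(drops), 3))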
  • ZHANG Jin, HUANG Qing-shan, ZHAO Wen-dong, PENG Lai-xian
    Computer Engineering. 2013, 39(12): 255-259. https://doi.org/10.3969/j.issn.1000-3428.2013.12.054
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    The Symmetrical Primary Standby Switching(SPSS) mechanism employed by existing flow measurement algorithms cannot make full use of memory space. An Asymmetrical Primary Standby Switching(APSS) mechanism is presented to improve the space efficiency of data flow measurement algorithms. The APSS mechanism is based on the observations that the flow arrival process is stable and that DRAM supports bulk writes, which are much faster than random accesses, so a small standby memory is enough to realize the primary-standby mechanism. Experimental results show that, compared with SPSS, APSS can reduce memory consumption by almost a half while having only a trivial impact on the measurement error probability.
  • REN Hai-ke, HU Yin-feng
    Computer Engineering. 2013, 39(12): 260-263,268. https://doi.org/10.3969/j.issn.1000-3428.2013.12.055
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    In order to improve the video packet loss recovery rate of multimedia systems, this paper proposes an optimization algorithm that applies a second Cauchy RS coding pass to multi-frame video data packets, based on traditional Cauchy RS coding and diagonal interleaving. The algorithm performs the two Cauchy RS coding passes with a single matrix operation by dynamically selecting an appropriate Cauchy matrix, and modifies the diagonal interleaving to adapt to the characteristics of video data packets. Experimental results indicate that, compared with the traditional Cauchy RS coding algorithm, the optimized algorithm significantly improves the packet loss recovery rate while keeping the decoding performance, decoder delay and number of parity packets similar.
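    A toy Python sketch of diagonal interleaving is given below (illustration only; the paper's modified interleaver and the Cauchy RS coding itself are not reproduced): each row holds one frame's packets, and row r is cyclically shifted by r, so a burst loss in the transmission order hits different positions of different frames.

      def diagonal_interleave(block):
          # Right-rotate row r by r positions; columns then follow diagonals.
          return [row[-r:] + row[:-r] if r else row[:] for r, row in enumerate(block)]

      def diagonal_deinterleave(block):
          # Inverse operation: left-rotate row r by r positions.
          return [row[r:] + row[:r] if r else row[:] for r, row in enumerate(block)]

      if __name__ == "__main__":
          frames = [[f"F{r}P{c}" for c in range(5)] for r in range(4)]
          inter = diagonal_interleave(frames)
          assert diagonal_deinterleave(inter) == frames
          for row in inter:
              print(row)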
  • HONG Qi, ZHAO Zhi-wei, HE Min
    Computer Engineering. 2013, 39(12): 264-268. https://doi.org/10.3969/j.issn.1000-3428.2013.12.056
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Low latency, high throughput and small area are three major considerations in FPGA-based design. In view of these factors, this paper puts forward SRT floating-point division and square root algorithms with different radices. It designs three implementations of floating-point division and square root operations with variable width based on the Virtex-II Pro FPGA: a low-cost iterative implementation, a low-latency array implementation, and a high-throughput pipelined implementation. Experimental results show that the highest frequencies of the floating-point division and square root implementations can reach above 180 MHz and 200 MHz respectively while meeting the synthesized area requirements, which fully verifies the effectiveness of the implementation scheme.
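    As a software illustration of the algorithm family involved (not the FPGA design itself), the following Python sketch emulates radix-2 SRT division with quotient digits {-1, 0, +1}, assuming the divisor is normalized to [0.5, 1); higher-radix variants refine the digit selection but follow the same recurrence.

      def srt_divide(x, d, bits=24):
          assert 0.5 <= d < 1.0 and abs(x) <= d
          r, q = x, 0.0
          for j in range(1, bits + 1):
              two_r = 2.0 * r
              # Redundant digit selection needs only a low-precision comparison.
              if two_r >= 0.5:
                  digit = 1
              elif two_r <= -0.5:
                  digit = -1
              else:
                  digit = 0
              r = two_r - digit * d          # next partial remainder, |r| <= d
              q += digit * 2.0 ** (-j)       # accumulate signed quotient digits
          return q

      if __name__ == "__main__":
          x, d = 0.3, 0.75
          q = srt_divide(x, d)
          print(q, x / d, abs(q - x / d) < 2 ** -23)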
  • DAI Tian-zhe, QIU Ci-yun, REN Min-hua
    Computer Engineering. 2013, 39(12): 269-272,276. https://doi.org/10.3969/j.issn.1000-3428.2013.12.057
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    In digital communication systems, a Decision Feedback Equalizer(DFE) is usually used to track the time-varying channel response, adaptively adjust the filter tap coefficients and remove the Inter Symbol Interference(ISI) caused by channel attenuation and noise. Aiming at the difficulty of determining the equalizer length, this paper analyzes the structure of the DFE in depth, studies the adaptive decision feedback equalization algorithm based on optimal estimation theory, and investigates the influence of the filter length on the Mean Square Error(MSE). On this basis, a Varying Threshold-Dynamic Length Algorithm(VT-DLA) is proposed to find a tradeoff between the minimum mean square error and the optimal filter length. Matlab analysis and simulation results show that the equalizer length converges to approximately 30 taps under severe channel attenuation and noise, and that the algorithm can track the channel response and converge to the optimal filter length with respect to the instantaneous accumulative MSE.
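    A minimal fixed-length LMS-trained DFE is sketched below for reference (the paper's VT-DLA additionally adapts the equalizer length, which is not reproduced); the channel, noise level, tap counts and step size are made-up example values.

      import numpy as np

      def dfe_lms(rx, tx, n_ff=8, n_fb=4, mu=0.01):
          # Train a DFE with LMS using known symbols tx; return the squared-error curve.
          w_ff = np.zeros(n_ff)                 # feedforward taps
          w_fb = np.zeros(n_fb)                 # feedback taps
          past = np.zeros(n_fb)                 # previously decided symbols
          err2 = []
          for n in range(n_ff - 1, len(rx)):
              x = rx[n - n_ff + 1:n + 1][::-1]  # newest sample first
              y = w_ff @ x - w_fb @ past        # equalizer output
              e = tx[n] - y                     # training error
              w_ff += mu * e * x                # LMS updates
              w_fb -= mu * e * past
              past = np.concatenate(([tx[n]], past[:-1]))
              err2.append(e * e)
          return np.array(err2)

      if __name__ == "__main__":
          rng = np.random.default_rng(2)
          tx = rng.choice([-1.0, 1.0], size=5000)           # BPSK symbols
          h = np.array([1.0, 0.5, 0.2])                     # example ISI channel
          rx = np.convolve(tx, h)[:len(tx)] + 0.05 * rng.standard_normal(len(tx))
          mse = dfe_lms(rx, tx)
          print("MSE first/last 500 symbols:", mse[:500].mean(), mse[-500:].mean())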
  • LI Chao-bei, LI Chuan-dong, ZHANG Jin-cheng
    Computer Engineering. 2013, 39(12): 273-276. https://doi.org/10.3969/j.issn.1000-3428.2013.12.058
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Using the mathematical model of the charge-controlled memcapacitor, a memcapacitor bridge circuit consisting of four identical memcapacitors is proposed that is able to perform zero, negative and positive synaptic weighting. Together with three additional transistors, the memcapacitor bridge weight circuit is able to perform synaptic operations for neural cells. It is power efficient, since the operation is based on pulsed input signals. Synaptic weight programming and synaptic weight multiplication are simulated in Matlab. Simulation results show that the performance of the synapse circuit based on the linear-memcapacitor bridge is almost equal to that of the memristor bridge synapse circuit, and better than that of the traditional multiplication circuit.
  • LIU Jie, LIU Zhen, HUANG Peng
    Computer Engineering. 2013, 39(12): 277-279,284. https://doi.org/10.3969/j.issn.1000-3428.2013.12.059
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Virtual humans have important application value in e-business, but current trading models lack consideration of emotion parameters. An emotion model of the virtual human in the trading process is set up. The model integrates the personality and emotion of a virtual human. A probability function is introduced to express information uncertainty in a trade, and an emotion formula is built. Simulation experimental results show that the emotional transaction model can reflect realistic trading rules and better simulate the actual transaction process.
  • WAN Hua, ZHOU Fan, HU Yin-feng
    Computer Engineering. 2013, 39(12): 280-284. https://doi.org/10.3969/j.issn.1000-3428.2013.12.060
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    To handle the large number of front-end signals and the huge amount of beamforming computation in underwater real-time 3D sonar imaging systems, a real-time imaging system for underwater 3D scenes based on Field Programmable Gate Array(FPGA) is proposed and designed. An FPGA array controls the synchronous sampling of multiple signals, the beamforming algorithm is optimized for parallel processing of massive data, an embedded PowerPC processor is used for system management, and real-time 3D images are displayed on a host PC. Experimental results show that the system is able to form 3D images with a resolution of 2 cm within a range of 200 m underwater, and the 3D image refresh rate reaches up to 20 frames per second.
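    As a generic software illustration of the beamforming stage (not the paper's 3D sonar design), the following Python sketch performs delay-and-sum beamforming for a uniform linear array; the array geometry, sampling rate, sound speed and test signal are made-up values.

      import numpy as np

      def delay_and_sum(signals, fs, spacing, angle_deg, c=1500.0):
          # Steer a uniform linear array towards angle_deg (0 = broadside).
          n_elem, n_samp = signals.shape
          tau = spacing * np.arange(n_elem) * np.sin(np.radians(angle_deg)) / c
          out = np.zeros(n_samp)
          t = np.arange(n_samp) / fs
          for i in range(n_elem):
              # Fractional-sample delay via interpolation of each channel.
              out += np.interp(t - tau[i], t, signals[i], left=0.0, right=0.0)
          return out / n_elem

      if __name__ == "__main__":
          fs, f0, c, d = 100_000, 10_000, 1500.0, 0.075   # Hz, Hz, m/s, m
          n_elem, n_samp = 16, 512
          t = np.arange(n_samp) / fs
          theta = 20.0                                    # true source direction
          tau = d * np.arange(n_elem) * np.sin(np.radians(theta)) / c
          sig = np.stack([np.sin(2 * np.pi * f0 * (t + ti)) for ti in tau])
          for a in (0.0, 10.0, 20.0):
              y = delay_and_sum(sig, fs, d, a, c)
              print(f"steer {a:5.1f} deg  power {np.mean(y**2):.3f}")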
  • JI Ming-yu, WANG Hai-tao, CHEN Zhi-yuan
    Computer Engineering. 2013, 39(12): 285-289. https://doi.org/10.3969/j.issn.1000-3428.2013.12.061
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    According to the demand for property verification of complex information systems with random behavior, this paper presents a verification and analysis method for stratified until-formula properties of discrete probabilistic reward models. A new, more expressive probabilistic computation tree logic, able to express both transition reward intervals and transition step intervals, is used to describe stratified until-formula properties of the system model, building on existing discrete stochastic logic variants. By using an automaton to express the logic path formula, the corresponding state probability satisfaction algorithm is described based on a product model that realizes the simultaneous evolution of the model and the automaton. An example verifies the feasibility and validity of the method.
  • HUANG Peng, LIU Zhen
    Computer Engineering. 2013, 39(12): 290-293. https://doi.org/10.3969/j.issn.1000-3428.2013.12.062
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Simulation of autonomous crowd behavior is a challenging subject, and how to build a virtual human with credible behavior is the key technology. Based on Maslow's motivation theory, a behavior model of an autonomous crowd is set up. The model integrates stimuli, motivation and behavior with production rules. A Finite State Machine(FSM) is introduced to express the relation between stimuli and motivation. A virtual human perceives stimuli through virtual vision, and local collisions among virtual humans are handled by repulsive forces. A prototype system is realized on a PC, and the results show that the model can closely simulate autonomous crowd behavior.
  • WU Fu-xiang, DONG Jian-kang, ZHOU Fu-gen
    Computer Engineering. 2013, 39(12): 294-297. https://doi.org/10.3969/j.issn.1000-3428.2013.12.063
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Indirect lighting calculation based on ray casting is a time-consuming process, so the number of samples is limited in interactive programs. Considering spatial correlation, a cone-ray casting algorithm is proposed to alleviate the insufficient sampling. It employs a conical boundary to pre-exclude scene elements, is implemented on the GPU as a stack-less algorithm using OpenGL and OpenCL, and additionally optimizes the storage layout of the data to achieve wider bandwidth. The results show that the algorithm can efficiently compute indirect illumination and brings about a two-fold performance increase.
  • XU Hong-yun, CHEN Zhi-feng
    Computer Engineering. 2013, 39(12): 298-302. https://doi.org/10.3969/j.issn.1000-3428.2013.12.064
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Because of the existence of Programmable Logic Device(PLD) security vulnerabilities and their great harm to electronic equipment, visualization is used as an assistive technology for PLD security vulnerability detection, in which the layout of the state transition diagram is the key problem. Aiming at the deficiencies of state transition diagram layouts, such as node overlap and uneven distribution of nodes, an improved visualization layout algorithm, IGVA, is proposed. The algorithm uses a heuristic method to compute the attractive and repulsive forces in different stages: it decreases the attractive forces between nodes to avoid node overlap in early iterations, and decreases the repulsive forces along edges to optimize the distribution of nodes in final iterations, which reduces the space used by the graph. Experimental results show that IGVA solves the node overlap problem and achieves the layout goal of the diagram.
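    The staged force scaling can be sketched in Python as follows, on top of a basic force-directed (Fruchterman-Reingold style) layout; the concrete staging thresholds, scaling factors and cooling schedule are illustrative guesses, not the exact rules of IGVA.

      import math
      import random

      def layout(nodes, edges, iters=300, area=1.0):
          k = math.sqrt(area / len(nodes))                   # ideal edge length
          pos = {v: [random.random(), random.random()] for v in nodes}
          for it in range(iters):
              attract = 0.3 if it < iters // 3 else 1.0      # weaker early attraction
              repulse = 1.0 if it < 2 * iters // 3 else 0.3  # weaker late repulsion
              disp = {v: [0.0, 0.0] for v in nodes}
              for v in nodes:                                # pairwise repulsion
                  for u in nodes:
                      if u == v:
                          continue
                      dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
                      dist = max(math.hypot(dx, dy), 1e-6)
                      f = repulse * k * k / dist
                      disp[v][0] += dx / dist * f
                      disp[v][1] += dy / dist * f
              for u, v in edges:                             # attraction along edges
                  dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
                  dist = max(math.hypot(dx, dy), 1e-6)
                  f = attract * dist * dist / k
                  for a, s in ((v, -1), (u, 1)):
                      disp[a][0] += s * dx / dist * f
                      disp[a][1] += s * dy / dist * f
              temp = 0.1 * (1.0 - it / iters)                # cooling schedule
              for v in nodes:
                  d = max(math.hypot(*disp[v]), 1e-6)
                  step = min(d, temp)
                  pos[v][0] += disp[v][0] / d * step
                  pos[v][1] += disp[v][1] / d * step
          return pos

      if __name__ == "__main__":
          random.seed(0)
          nodes = list(range(6))
          edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
          for v, p in layout(nodes, edges).items():
              print(v, [round(c, 3) for c in p])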
  • RUAN Shun-ling, LU Cai-wu
    Computer Engineering. 2013, 39(12): 303-307. https://doi.org/10.3969/j.issn.1000-3428.2013.12.065
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    In virtualized data centers, the energy consumption of cooling and power devices is large and the waste is serious, but existing research on energy consumption optimization considers only IT devices. Through a study of the energy consumption logic, an energy consumption optimization scheduling method is proposed for virtualized data centers. The method generates dynamic scheduling strategies according to virtual scheduling rules by analyzing the load and heat distribution in the data center. By reducing redundant cooling and the no-load power waste of devices in the data center, the energy consumption is minimized. Experimental results show that the scheduling method can reduce redundant cooling by nearly 26% and improve the power supply efficiency by nearly 8%, which improves the power usage effectiveness and reduces the energy consumption of the data center.
  • PENG Li, YANG Heng-fu
    Computer Engineering. 2013, 39(12): 308-315. https://doi.org/10.3969/j.issn.1000-3428.2013.12.066
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    In order to decide ABox consistency for the Description Logic(DL) SHIQ, a Tableau algorithm is presented. Given a TBox T, an ABox A and a role hierarchy H, the algorithm first converts A into a standard ABox A' by preprocessing, and then applies a set of Tableau rules to A' according to specific completion strategies, so that A' is extended continually until it becomes a complete ABox A''. A is consistent with respect to T and H if and only if the algorithm can yield a complete and clash-free ABox A''. The blocking mechanism adopted by the algorithm prevents unlimited execution of the Tableau rules and avoids redundant rule applications. Termination is ensured by proving that the number of Tableau rule applications is bounded; soundness is ensured by proving that the execution of the Tableau rules does not destroy the consistency between A' and H; completeness is ensured by proving that an interpretation satisfying A, T and H can be constructed from A''.
  • LEI Jian-mei, BAI Yun, CHEN Min, FENG Yu-ming, LIU Jie, GAO Yang-chun
    Computer Engineering. 2013, 39(12): 316-320. https://doi.org/10.3969/j.issn.1000-3428.2013.12.067
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    The whole vehicle model is the basic platform for vehicle electromagnetic performance simulation and analysis, and the model precision directly determines the accuracy and reliability of the simulation results. In order to obtain a quantitative assessment of vehicle electromagnetic models, an assessment mechanism is put forward. It uses the model error to inversely calculate the model precision. The model error is defined as the weighted sum of a global error and a part error; the global error is defined as the weighted sum of the global dimension error and the mean area of part surfaces; the part error is defined as the weighted sum of the maximum, mean and coefficient of variation of part deviations. All weighting coefficients are adjustable according to the designer's experience and the target usage of the model, and this flexibility makes the assessment mechanism applicable to vehicle electromagnetic models for most electromagnetic simulation applications. As a validation, gain pattern simulation results show very good consistency with the model precision calculated on the basis of the proposed assessment mechanism.
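    A minimal Python sketch of the weighted-error aggregation described above is given below; the individual error terms, the weights and the mapping from error to precision (taken here as 1 minus the error) are hypothetical, and only the aggregation structure follows the text.

      def weighted_sum(values, weights):
          assert len(values) == len(weights)
          return sum(v * w for v, w in zip(values, weights))

      def model_error(global_terms, part_terms, w_global, w_part, w_mix=(0.5, 0.5)):
          # Model error = weighted sum of a global error and a part error.
          global_error = weighted_sum(global_terms, w_global)  # e.g. global dimension, mean surface area deviation
          part_error = weighted_sum(part_terms, w_part)        # e.g. max, mean, coefficient of variation
          return weighted_sum((global_error, part_error), w_mix)

      if __name__ == "__main__":
          # Hypothetical normalized deviations in [0, 1].
          err = model_error(global_terms=(0.02, 0.05), part_terms=(0.10, 0.04, 0.08),
                            w_global=(0.6, 0.4), w_part=(0.5, 0.3, 0.2))
          print("model error:", round(err, 4), "-> model precision:", round(1 - err, 4))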