
05 October 2010, Volume 36 Issue 19
    

    Networks and Communications
  • QIN Cheng-Gang, XU Dong, TUN Wen-Jiang, DING Mo-Fu, HU Yi
    Computer Engineering. 2010, 36(19): 1-4. https://doi.org/10.3969/j.issn.1000-3428.2010.19.001
    This paper presents a dynamic feedback real-time scheduling model based on Lebesgue sampling and an elastic scheduling algorithm. The workload of a soft real-time system is held below a reference value by adjusting task rates. An interrupt is triggered when the system is overloaded, so the scheduling model can be regarded as an event-based system; the mechanism is realized by a watchdog. The model is implemented in the RTAI real-time system, and its dynamic and steady-state characteristics are tested. Experimental results show that the model reduces the workload of task scheduling while keeping the system stable.
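The core of Lebesgue sampling, as opposed to periodic (Riemann) sampling, is to act only when the monitored value has moved far enough since the last sample. A minimal illustrative sketch follows; the signal, threshold and function name are hypothetical, not the paper's RTAI implementation.

```python
def lebesgue_samples(signal, delta):
    """Return indices where the signal has moved more than `delta`
    since the last retained sample (event-triggered sampling)."""
    if not signal:
        return []
    kept = [0]
    last = signal[0]
    for i, v in enumerate(signal[1:], start=1):
        if abs(v - last) > delta:
            kept.append(i)
            last = v
    return kept

# A slowly drifting load signal fires far fewer events than a
# fixed-period sampler would take samples.
load = [0.50, 0.51, 0.52, 0.80, 0.81, 0.82, 0.40]
print(lebesgue_samples(load, 0.1))  # → [0, 3, 6]
```

In the scheduling context, each retained index corresponds to an overload event that wakes the watchdog, instead of re-evaluating the workload on every timer tick.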
  • SHI Bai-Ying, YANG Xiao-Guang, SHU Tong
    Computer Engineering. 2010, 36(19): 5-7. https://doi.org/10.3969/j.issn.1000-3428.2010.19.002
    To address the problem of local intelligence, this paper proposes a scheme combining an embedded platform with a signal controller. The embedded hardware is divided into a core board and an expansion board, and the software modules of the embedded platform, including the application program modules and the development and porting of the embedded operating system and file system, are introduced. The system is implemented and tested, and the test results show that the computation time meets the demands of real-time traffic control.
  • LIU Zhi-Xiong, YANG Guang-Xiang
    Computer Engineering. 2010, 36(19): 8-10. https://doi.org/10.3969/j.issn.1000-3428.2010.19.003
    This paper proposes a two-dimensional encoding approach based on the job operation sequence and uses an Evolutionary Strategy(ES) algorithm to solve the Job-shop Scheduling Problem(JSP). A recombination operation based on three-point crossover and interchange generates the offspring individuals, and a mutation operation is designed in which some genes in the encoding are stochastically regenerated. Experimental results show that the ES algorithm can effectively optimize the JSP and outperforms the genetic algorithm and the hybrid particle swarm optimization algorithm. The ES algorithm with the three-point crossover and interchange recombination operation also outperforms the variants based on two-point and four-point crossover and interchange.
  • CHEN Chao, LUO Mo-Meng, YAN Bao-Beng
    Computer Engineering. 2010, 36(19): 11-13. https://doi.org/10.3969/j.issn.1000-3428.2010.19.004

    This paper proposes a hierarchical IPv6 real-name address space as a network identity. It builds an IPv6 real-name address resource PKI according to the current IP address allocation structure for allocation, management and authentication. It designs and implements a trustworthy communication system supported by a third party's IPv6 real-name address resource PKI. Experimental results show that the mechanism achieves the goals of real-name communication and privacy protection.

  • XU Min, HE Zheng-You, JIAN Qing-Quan
    Computer Engineering. 2010, 36(19): 14-17. https://doi.org/10.3969/j.issn.1000-3428.2010.19.005
    To deal with the large scale of complex monitoring systems and the redundant structures of their critical components, a hybrid reliability analysis method is presented on the basis of dynamic fault trees combined with Monte Carlo simulation. A Dynamic Fault Tree(DFT) is used to establish the reliability model of the monitoring system, and the reliability indices are obtained by the Monte Carlo method used to solve the model. A reliability analysis case of a subway station-level monitoring system demonstrates the feasibility of the model and the effectiveness of the algorithm.
  • TAO Can-Zhong, YANG Jian-Mei
    Computer Engineering. 2010, 36(19): 18-20. https://doi.org/10.3969/j.issn.1000-3428.2010.19.006
    This paper discusses the influence of the Scale-Free Like(SFL), GLOBAL, CYCLE, ER and STAR neighborhood topologies on the optimization performance of Particle Swarm Optimization(PSO). Analysis and experimental results show that PSO based on a scale-free network neighborhood topology performs better than PSO based on other neighborhood topologies such as regular, random and star networks, and better than traditional PSO. An approach using a scale-free network neighborhood topology is suggested to improve the performance of PSO near the optima and its convergence speed. The mean degree of the network also influences the optimization performance of PSO.
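The topologies compared above differ only in which particles each particle consults when choosing its neighborhood best. A minimal sketch of the CYCLE (ring) lookup follows, with a hypothetical fitness function; the GLOBAL topology is the special case where every particle is a neighbor of every other.

```python
def ring_best(positions, fitness, i, k=1):
    """Index of the fittest particle among particle i's ring (CYCLE)
    neighbourhood: i itself plus k neighbours on each side.
    Lower fitness is better (minimization)."""
    n = len(positions)
    neigh = [(i + d) % n for d in range(-k, k + 1)]
    return min(neigh, key=lambda j: fitness(positions[j]))

f = lambda x: (x - 3) ** 2          # toy 1-D objective, minimum at x = 3
pos = [0.0, 1.0, 2.5, 5.0, 9.0]
print(ring_best(pos, f, 4))         # neighbours of 4 are {3, 4, 0} → 3
```

In a scale-free topology the neighbor list would instead come from a network whose degree distribution follows a power law, so a few hub particles spread good solutions quickly.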
  • WANG Rui-Rui, MA Jian-Wen, CHEN Xue
    Computer Engineering. 2010, 36(19): 21-23. https://doi.org/10.3969/j.issn.1000-3428.2010.19.007
    The imaging models, acquisition angles and resolutions of images acquired by diverse remote sensing sensors differ, which makes registration between them difficult. Aiming at this problem, a normalized SIFT algorithm is proposed. The SIFT descriptors are normalized to reduce the impact of hue differences between images acquired by different sensors, and combined with least-squares fitting and bilinear interpolation, automatic registration is achieved. Two groups of images with large differences in acquisition angle and resolution, SPOT with ASTER and ASTER with TM, are tested. Results indicate that the algorithm is robust and highly accurate.
  • DIAO Chuan-Shen, WANG Ru-Chuan, JI Yi-Mu
    Computer Engineering. 2010, 36(19): 24-26. https://doi.org/10.3969/j.issn.1000-3428.2010.19.008
    It is difficult for PSO to detect dynamic changes of the environment and respond to them during the optimization process. Aiming at these problems, this paper adds particles on the periphery of the swarm to detect environmental changes, proposes a diffuse population function to respond to them, and designs an algorithm named Diffuse Particle Swarm Optimization(DPSO). Compared with APSO and CPSO, it detects environmental changes more effectively and tracks the optimum solution faster.
  • HU Jia, FENG Zhi-Yong, XU Chao, WANG Hui
    Computer Engineering. 2010, 36(19): 27-30. https://doi.org/10.3969/j.issn.1000-3428.2010.19.009
    This paper proposes a consistency verification approach for semantic Web Service processes based on Petri nets. It converts the Web Service process into a Petri net model, obtains the parallelizable service pairs by analyzing the reachability graph of the model, and determines whether conflicts exist between any two parallelizable services according to the domain ontology and the semantic functionality of both services. It computes the possible states before the execution of each service, on the basis of which it decides the executability of each service. The correctness and validity of the approach are verified through a practical case.
  • DAO Jun, ZHANG Xia
    Computer Engineering. 2010, 36(19): 31-33. https://doi.org/10.3969/j.issn.1000-3428.2010.19.010
    To improve the localization accuracy of the light-spot pixel position in a fiber sensing measurement system based on a Charge-Coupled Device(CCD), this paper proposes a new algorithm based on the traditional center-of-gravity algorithm, which adopts weight nonlinearity and linear interpolation. The localization precision of the two algorithms is compared under different noise levels. Simulation results show that the new algorithm reaches a localization precision of 0.05 pixel and is more precise and stable.
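The center-of-gravity family of estimators can be sketched in a few lines. Raising each pixel value to a power is an illustrative stand-in for the paper's weight nonlinearity (it lets the bright core dominate the estimate and suppresses background noise); the linear-interpolation refinement is omitted, and the profile and function name are hypothetical.

```python
def weighted_centroid(intensity, power=2):
    """Sub-pixel light-spot position from a 1-D CCD intensity profile,
    using a nonlinearly weighted centre-of-gravity estimate."""
    weights = [v ** power for v in intensity]
    total = sum(weights)
    return sum(i * w for i, w in zip(range(len(weights)), weights)) / total

profile = [0, 1, 4, 9, 4, 1, 0]      # symmetric spot centred on pixel 3
print(weighted_centroid(profile))     # → 3.0
```

With `power=1` this reduces to the traditional center-of-gravity algorithm; larger powers trade noise robustness against sensitivity to saturation of the peak pixels.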
  • CHEN Xiang, TUN Ti
    Computer Engineering. 2010, 36(19): 34-36. https://doi.org/10.3969/j.issn.1000-3428.2010.19.011
    Traditional association rule mining algorithms have low efficiency and produce a large amount of redundancy in their mining results. Aiming at this problem, this paper presents an association rule mining algorithm based on a base set and concept lattice. It replaces the original database with a base set that carries the seed item distribution range, and builds a concept lattice to find association rules. Experimental results show that this algorithm is much superior to the Base and Apriori algorithms in time efficiency.
  • CI Guan-Na, HU Jing
    Computer Engineering. 2010, 36(19): 37-38. https://doi.org/10.3969/j.issn.1000-3428.2010.19.012

    In order to evaluate the performance of Extensible Markup Language(XML) databases, this paper presents an XML database test scheme based on the TPC-C benchmark. Aiming at the traits of XML databases, it specializes the data structures and query statements: the original nine tables are mapped to five new XML Schema documents, and the workload is rewritten according to the SQL/XML standard. The scheme is used to test a SQL Server 2005 database, and the results show that the characteristics of the various transactions are equivalent to those of the TPC-C benchmark.

  • SHU Hao-Dong, ZHONG Yong
    Computer Engineering. 2010, 36(19): 39-41. https://doi.org/10.3969/j.issn.1000-3428.2010.19.013
    Feature subsets selected by traditional feature selection methods contain many redundant features and are not representative. Aiming at this problem, this paper presents a feature selection method based on Rough Set(RS) theory and the pansystems equivalence operator. The method uses document frequency based on minimum word frequency to extract original features, uses the pansystems equivalence operator to extend RS, and gives an attribute reduction algorithm to eliminate redundancy, so a more representative feature subset is acquired. Experimental results show that the method achieves higher precision and recall.
  • LIU Jun, TAO Tian-Fang
    Computer Engineering. 2010, 36(19): 42-43. https://doi.org/10.3969/j.issn.1000-3428.2010.19.014

    In order to compute semantic relevancy for domain-specific knowledge in opinion mining, this paper proposes a semantic relevancy computing method based on Wikipedia. On the basis of a category tree constructed from Wikipedia, it represents the vast number of words in Wikipedia by their categories, resulting in a Wikipedia dictionary containing rich domain-specific knowledge, and then computes semantic relevancy using the dictionary. Experimental results show that the Spearman rank correlation coefficient of this method reaches 0.77.

  • XU Shi-Wen, AI Jun, ZHANG Yi-Fu
    Computer Engineering. 2010, 36(19): 44-46. https://doi.org/10.3969/j.issn.1000-3428.2010.19.015
    This paper presents a multi-Agent cooperation framework for software test data generation, addressing the poor extensibility and low intelligence of existing test data generation methods. The framework consists of an information extraction Agent group and a test data generation Agent group, and the test data is generated through the cooperation of the Agents. The framework makes full use of the scalable, flexible and autonomous characteristics of Agents. Based on the proposed theory, a software prototype is developed, which proves that the framework is feasible.
  • WANG Lei, ZHANG Yun-Quan, LIU Fang-Fang, ZHANG Xian-Die
    Computer Engineering. 2010, 36(19): 47-49. https://doi.org/10.3969/j.issn.1000-3428.2010.19.016
    This paper improves the High Performance Linpack(HPL) software package by using a mixed precision algorithm to solve sets of linear equations. Performance tests, covering performance and speedup, the number and time of iterations, and error analysis for the original and improved HPL packages, are conducted on a platform of four dual-core AMD Opteron 870 processors. Experimental results show that the computing performance of the improved package is enhanced almost twofold compared with the original HPL while keeping double floating point precision, and it also has good scalability.
  • LIU Zhuo-Yang
    Computer Engineering. 2010, 36(19): 50-52. https://doi.org/10.3969/j.issn.1000-3428.2010.19.017
    This paper improves the implementation of an existing Complex Event Processing(CEP) engine according to its present advantages and disadvantages. Considering key points including ad hoc queries and time windows, it adopts pre-processing and dispatch modules, and implements an improved edition(SPSA) of the STEAM system. Experimental results prove that the solution is constructive: it can improve the efficiency of the engine and decrease the pressure on the system under some conditions.
  • BO Shi-Chi, DU Gong-Wei
    Computer Engineering. 2010, 36(19): 53-55. https://doi.org/10.3969/j.issn.1000-3428.2010.19.018
    In this paper, a technology named SoftSIMD is introduced to achieve parallel computing speedup between the high and low subwords of a register on processors that do not have Single Instruction Multiple Data(SIMD) extensions. The techniques cover addition and subtraction, multiplication, and dot product operations. On the basis of recent research, negative operands in dot product operations are discussed, and the SoftSIMD technology is applied to complex operations, which makes it better adapted to applications in the field of Digital Signal Processing(DSP).
  • FENG Shao-Rong, ZHANG Dong-Zhan
    Computer Engineering. 2010, 36(19): 56-58. https://doi.org/10.3969/j.issn.1000-3428.2010.19.019
    In order to overcome the shortcomings of DBDC, a distributed clustering algorithm based on centers and density, called DCUCD, is proposed. Virtual core objects are generated from the distributed data, and their quality improves as the algorithm runs more iterations. Clustering then amounts to classifying all of the core objects. Theoretical analysis and experimental results testify that DCUCD can effectively deal with local noise and discover clusters of arbitrary shape; it generates high quality clusters at a small time cost.
  • SHEN Yong-Bei, CA Mian, HU Dun, TIAN Jian-Sheng
    Computer Engineering. 2010, 36(19): 59-61. https://doi.org/10.3969/j.issn.1000-3428.2010.19.020
    According to the system dependencies between packages, this paper proposes a labeling algorithm and security marking rules to classify software packages, forming an application level-type structure. Based on this structure, it uses mandatory access control mechanisms and trusted computing technologies to implement isolation between application levels, so as to reduce or eliminate unintended interference between applications and provide the user with a safe and credible environment.
  • TUN Hai-Chao, TANG Zhen-Min
    Computer Engineering. 2010, 36(19): 62-64. https://doi.org/10.3969/j.issn.1000-3428.2010.19.021
    Dewey coding is an important encoding scheme for XML documents and an important preprocessing step in XML keyword search. This paper proposes two algorithms for Dewey coding of XML documents, a recursive algorithm based on DOM and an event-driven algorithm based on SAX, and compares the running time and memory consumption of the two. Experimental results show that the SAX event-driven algorithm is faster and consumes less memory for very large XML documents.
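A streaming Dewey labeler can be sketched with event-driven parsing. Python's `xml.etree.ElementTree.iterparse` plays the role of SAX start/end events here, and clearing each element after its end event keeps memory flat, as in the SAX approach; the function name and sample document are illustrative.

```python
import xml.etree.ElementTree as ET
from io import StringIO

def dewey_labels(xml_text):
    """Assign a Dewey code to every element in one streaming pass.
    Each element's code is its parent's code plus its 1-based child rank."""
    labels = []
    counters = [0]                     # child count at each open level
    events = ET.iterparse(StringIO(xml_text), events=("start", "end"))
    for event, elem in events:
        if event == "start":
            counters[-1] += 1
            labels.append((elem.tag, ".".join(str(c) for c in counters)))
            counters.append(0)         # open a fresh level for children
        else:
            counters.pop()
            elem.clear()               # free the finished subtree
    return labels

doc = "<a><b><c/></b><b/></a>"
print(dewey_labels(doc))
# → [('a', '1'), ('b', '1.1'), ('c', '1.1.1'), ('b', '1.2')]
```

Because a code is emitted as soon as an element starts, the whole tree never needs to be materialized, which is why the event-driven variant wins on very large documents.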
  • YU Chao, MOU Guo-Qiang
    Computer Engineering. 2010, 36(19): 65-66. https://doi.org/10.3969/j.issn.1000-3428.2010.19.022
    A method of verifying data structure properties based on automated test generation is introduced. It combines traditional model checking with automated software test generation and searches the state space in an orderly way, ensuring both the soundness of the generated states and the completeness of state space coverage. The properties are described in a program logic that is familiar to programmers and easy to use.
  • TAN Xi-Gong, CHEN Chi-Beng, OU Yang-Jing-Cheng, LIN E-Beng
    Computer Engineering. 2010, 36(19): 67-69. https://doi.org/10.3969/j.issn.1000-3428.2010.19.023
    To address the challenge of duplicated retrieval results in P2P networks, a mechanism for merging retrieved results is proposed. A method for detecting duplicated results and a strategy for selecting download peers for duplicated results, based on download data quantity and response time, are presented. The strategy can efficiently reduce the retrieval overlap, the network transfer quantity for downloading results, and the response time for acquiring results. Experimental results show that the method is efficient.
  • XU Bing-Xia, GU Jing-Fan
    Computer Engineering. 2010, 36(19): 70-71. https://doi.org/10.3969/j.issn.1000-3428.2010.19.024
    Many redundant test cases occur in coverage testing. This paper introduces an improved coverage test tool based on a global dominator graph algorithm; the algorithm is improved by adding an annotated looping tree and finding the nearest common node. It computes the minimum test case set that covers the source code. Experimental results show that the algorithm decreases the analysis time of the coverage result file, generates a small test case set, and achieves a high coverage rate.
  • LI Jin-Cheng, WU Xiu-Chuan, HU Huan-Huan
    Computer Engineering. 2010, 36(19): 72-74. https://doi.org/10.3969/j.issn.1000-3428.2010.19.025
    Resource discovery is a key problem in grid environments, and most existing resource discovery mechanisms still cannot effectively categorize resources. On the basis of a P2P grid, a resource discovery mechanism supporting multi-attribute queries based on a Distributed Resource Tree(DRT) is proposed. The distributed resource tree organizes and classifies resources by Primary Attributes(PA), and can dynamically balance the load of the nodes on the P2P grid.
  • HU Jiang-Meng, LI Jian-Hua, DU Zhang-Hua, WEI Feng
    Computer Engineering. 2010, 36(19): 75-77. https://doi.org/10.3969/j.issn.1000-3428.2010.19.026
    Existing labeling mechanisms have two main problems: low space-time efficiency and insufficient support for dynamic updates of XML documents. Combining the prime and IBSL labeling mechanisms, this paper presents a novel XML tree labeling scheme called Prime-based Binary String Labeling(PBSL), which has high query efficiency, supports update operations on XML documents, and greatly reduces storage space.
  • SHANG Xiao-Chun, LI Hong-Hua
    Computer Engineering. 2010, 36(19): 78-80. https://doi.org/10.3969/j.issn.1000-3428.2010.19.027
    In distributed systems, a computing job flow cannot be dynamically adjusted after being mapped to nodes, so essential jobs may fail to execute because of inter-job waiting. Aiming at this problem, this paper proposes a scheduling algorithm for computing job flows. After the jobs are mapped to distributed nodes, the algorithm derives order values from their dependence relations and dynamically adjusts priorities based on these values, so that essential jobs finish as soon as possible and inter-job waiting is reduced, greatly shortening the execution time of the job flow. Application in an actual system indicates that the algorithm has a strong advantage in the fast execution of large numbers of computing job flows in a job management system.
  • DAN Jing, LI Mo-Long
    Computer Engineering. 2010, 36(19): 81-83. https://doi.org/10.3969/j.issn.1000-3428.2010.19.028
    Latent Dirichlet Allocation(LDA) is used to express the probability distribution of words, and topic keywords are extracted according to Shannon information. Words that do not appear distinctly in the analyzed text can still be included to express the topics, with the help of background word clustering and topic-word association, so that the topic meaning is dug out. Fast Gibbs sampling is used to estimate the parameters. Experiments show that Fast Gibbs is 5 times faster than Gibbs with satisfactory precision, which shows that the approach is efficient.
  • LI Wen-Gao, WANG Hai-Xiang
    Computer Engineering. 2010, 36(19): 84-86. https://doi.org/10.3969/j.issn.1000-3428.2010.19.029
    Based on the characteristics of transaction-intensive workflow systems, this paper brings forward an algorithm named Pre-Calculated Scheduling Algorithm(PCSA) for transaction-intensive environments. In this algorithm, every workflow application is pre-calculated to create PRI lists for solution generation, so the algorithm can guarantee the minimum execution cost and transfer cost. Experimental results illustrate that the algorithm has preferable efficiency.
  • FANG Gang, YING Hong, XIONG Jiang, TUN Yuan-Bin
    Computer Engineering. 2010, 36(19): 87-89. https://doi.org/10.3969/j.issn.1000-3428.2010.19.030

    Aiming at the inefficiency of existing algorithms in extracting spatial topology associations, this paper proposes a mining algorithm that alternately searches spatial topology associations and is suitable for mining spatial topology association rules in large data sets. Starting from the two endpoints of the candidate digit interval, the algorithm generates candidate frequent itemsets in digit-ascending and digit-descending order so as to alternately search spatial topology association rules. It also uses digit features to reduce the number of scanned transactions when computing the support of itemsets, which improves mining efficiency. Experimental results show that the algorithm is faster and more efficient than existing algorithms when mining spatial topology association rules in spatial data.

  • TUN Xi-Hua, ZHONG Cheng, MO Yang-Gong, TANG Jin-Hui
    Computer Engineering. 2010, 36(19): 90-92. https://doi.org/10.3969/j.issn.1000-3428.2010.19.031
    The definition-use pairs for each instance variable and the preconditions and postconditions for each member function are computed by class data flow analysis. The class under test is preprocessed for inheritance and polymorphism. The search environment of Java PathFinder(JPF) is configured, and the problem of generating test cases is transformed into finding counterexamples in model checking. A method for generating class test cases in parallel with multi-threading on multi-core computers is presented. Experimental results show that the algorithm remarkably reduces the required states and the run time of test case generation, and takes less time as more processing cores are used.
  • ZHANG Ying-Ying, XIE Jiang, DING Qiu-Lin
    Computer Engineering. 2010, 36(19): 93-95. https://doi.org/10.3969/j.issn.1000-3428.2010.19.032
    To solve the low precision and recall of traditional Chinese keyword extraction caused by indifference to semantics and synonyms, a Chinese keyword extraction algorithm based on synonym chains is proposed. In the algorithm, the problem of word semantics in context is solved using context windows and a word sense disambiguation algorithm. Synonym chains are built from the synonyms in the document, which simplifies the selection of candidate words, and a keyword weight formula that filters candidate words is derived from the characteristics of the synonym chains. Experimental results show that the proposed algorithm achieves higher precision and recall on documents with many synonyms, and the average performance is obviously improved.
  • ZHENG Li-Xiong, CHEN Qiong, CHEN Yong-Meng
    Computer Engineering. 2010, 36(19): 96-98. https://doi.org/10.3969/j.issn.1000-3428.2010.19.033
    In this paper, a new approach using a multiple-view tree is proposed to solve the multi-relational classification problem. In multiple-view classification, individual views contribute differently to the classification task and are complementary to one another. The complementarity of views is studied, and a method for computing view complementarity is proposed; views are then chosen and integrated to construct a multi-view tree according to their complementarity. Experimental results show that the multi-relational classification method based on the multiple-view tree outperforms existing methods in terms of accuracy and efficiency.
  • LIU Sha, HOU Zheng-Feng
    Computer Engineering. 2010, 36(19): 99-101. https://doi.org/10.3969/j.issn.1000-3428.2010.19.034
    To make packet classification both fast in point location and scalable, this paper presents a new efficient packet classification algorithm based on computational geometry, which combines cross-producting with linear search. The algorithm can adjust storage usage by controlling the number of filters handled by the one-dimensional searches: with more filters searched through the one-dimensional data structures, the storage needed for the cross-producting table can be further decreased. Experimental results show that the algorithm improves both storage and time performance.
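The cross-producting idea, independent one-dimensional lookups whose results index a single precomputed table, can be sketched on a toy two-field classifier. Exact-match dictionaries stand in for the longest-prefix one-dimensional searches of a real implementation, and all names and filters are hypothetical.

```python
# Each filter matches a (source class, destination class) pair.
# Per-dimension tables map a field value to its matching class id;
# the cross-product table maps a class-id pair to the winning filter.
src_classes = {"10.0": 0, "10.1": 1}
dst_classes = {"20.0": 0, "20.1": 1}
crossproduct = {(0, 0): "f1", (0, 1): "f2",
                (1, 0): "f3", (1, 1): "f4"}

def classify(src, dst):
    """Two independent one-dimensional searches, then one table lookup."""
    return crossproduct[(src_classes[src], dst_classes[dst])]

print(classify("10.0", "20.1"))  # → f2
```

The table has one entry per class combination, which is why it grows quickly; handling part of the filters by linear search, as the abstract describes, shrinks the table at the cost of a few extra comparisons per packet.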
  • HE Qun
    Computer Engineering. 2010, 36(19): 102-103. https://doi.org/10.3969/j.issn.1000-3428.2010.19.035
    The definiteness-on-row and compositeness-on-column factors of each attribute value in a decision table are computed by applying the definite relation of rough sets. Extreme values of granules are defined by these factors, and the granules with extreme values are collected into a set of extreme granules. An optimal granule is found by combinational logic operations on the extreme granules of the set, and the optimal granule is the minimal rule. Experimental results show that the algorithm is efficient.
  • WANG Jun, ZHOU Hua-Hai
    Computer Engineering. 2010, 36(19): 104-106. https://doi.org/10.3969/j.issn.1000-3428.2010.19.036
    The main advantage of extending the existing instruction set of a processor is to minimize the time and cost of system design, reduce code size, limit the instruction fetch frequency and relieve the pressure on registers, thus lowering the overall system power consumption. On this basis, this paper presents a self-defined instruction generation algorithm combined with the ASAP framework. Through self-defined instruction expansion, the algorithm finds candidate instruction sets complying with multiple requirements, using data flow analysis, instruction clustering, subgraph enumeration and subgraph merging. Experimental results show that the algorithm can enumerate all non-trivial candidates efficiently.
  • GU , π , CHEN Xin-Lai, XU Xiao-Gang, TUN Jing
    Computer Engineering. 2010, 36(19): 107-109. https://doi.org/10.3969/j.issn.1000-3428.2010.19.037
    Database techniques are very important in designing a general development platform for a Virtual Maintenance Training System(VMTS). The database is designed in terms of the database tables, the communication between the program and the database, and the Web database server, and a design idea in which the database is separated from the logic is offered. Apache, PHP and MySQL are used as database development tools, the communication between the program and the database is realized through the TGE 3D engine, and finally control of the database is realized. Experimental results indicate that applying these database techniques to the general development platform makes the VMTS more general.
  • LUO Lan, CENG Bin
    Computer Engineering. 2010, 36(19): 110-112. https://doi.org/10.3969/j.issn.1000-3428.2010.19.038
    Existing cyclic association rule mining has the disadvantage of compartmentalizing a cycle into several time segments, and its basic algorithms suffer from low efficiency. This paper presents CARDSATSV. It clusters the time sequence vectors consisting of the supports of items, and uses the DB Index to determine the optimal number of clusters. It brings forward the Cyclic FP-tree(CFP-tree) to discover cyclic association rules; the CFP-tree applies cycle clipping based on conditional FP-trees to improve efficiency. Experiments show that, compared with existing cyclic association rule algorithms, CARDSATSV discovers more useful cyclic association rules and improves efficiency.
  • XIE Wei-Gong, ZHANG Jian-Jun, ZHENG Meng-Cai, LEI Xin-Guo
    Computer Engineering. 2010, 36(19): 113-116. https://doi.org/10.3969/j.issn.1000-3428.2010.19.039
    In sensor networks, a MAC address needs to be unique only locally rather than globally, so MAC addresses can be reused in different regions. In the virtual grid topology formed by the GAF algorithm, different MAC address allocation strategies can be adopted for clusters and nodes. The restrictions on the distribution of MAC addresses for clusters and nodes are discussed separately, and a distributed algorithm for spatial reuse of MAC addresses is proposed. Theoretical analysis and simulation experiments show that the algorithm can effectively reduce the length of MAC addresses and the energy consumption, and keeps good performance in dense networks.
  • ZHANG Ru-Yun, WANG Yu-Gong, HUANG Kai-Qi, JI Xin-Sheng
    Computer Engineering. 2010, 36(19): 117-119. https://doi.org/10.3969/j.issn.1000-3428.2010.19.040

    Timely firing of the Link-Going-Down(LGD) trigger is critical to handover performance and significant in determining whether a handover completes successfully. This paper proposes an LGD trigger algorithm based on the terminal's moving state. It calculates the received signal power from the terminal moving model and uses it as part of the predictive process. Simulation results show that the proposed algorithm significantly decreases packet loss and the time between handover completion and link down.

  • WANG Yu, WANG Yong-Sheng, WANG Li-Bei
    Computer Engineering. 2010, 36(19): 120-122. https://doi.org/10.3969/j.issn.1000-3428.2010.19.041
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    This paper proposes a QoS delay simulation model for wireless systems with Adaptive Modulation and Coding(AMC). The delay-constraint behavior of video streaming traffic is further simulated via a finite-state Markov channel model built on the Nakagami distribution. Simulation results reveal that a higher SNR gives better delay QoS, and that the model can be used to improve modeling accuracy and the simulation process.
  • FAN Xiong-Nan, CHEN Qiang-Kui
    Computer Engineering. 2010, 36(19): 123-125. https://doi.org/10.3969/j.issn.1000-3428.2010.19.042
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    This paper discusses how to design the node topology reasonably when the sensing ranges of sensor nodes in a Wireless Sensor Network(WSN) are adjustable, so as to balance the energy load of the network and prolong its lifetime. To ensure coverage and connectivity, it analyzes the nodes' distribution characteristics and topology structure under adjustable sensing ranges. Based on a redundant-node removal algorithm, it proposes an adaptive sensing-radius redundant-node sleeping algorithm. Results prove that the algorithm clearly improves the network's energy load balance while maximizing the sensing coverage region with a minimum number of active nodes.
  • ZHOU Qiu-Hua, JU Yan-Li
    Computer Engineering. 2010, 36(19): 126-127. https://doi.org/10.3969/j.issn.1000-3428.2010.19.043
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    This paper proposes a packet routing strategy with a tunable parameter based on link weights and node free-degrees. Data transmission on the BBV weighted network is subsequently studied. Extensive numerical simulations show that at an optimal value of the tunable parameter the network achieves its best performance, with the maximal communication capacity, a smaller average transmission time and the smallest load. Compared with the local link-weight routing strategy, the proposed strategy dramatically improves network capacity and reduces load in the jammed state. This study may shed light on congestion control in weighted networks.
  • SHUI Yong-Sheng, FENG An-Ceng
    Computer Engineering. 2010, 36(19): 128-131. https://doi.org/10.3969/j.issn.1000-3428.2010.19.044
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    To meet the communication needs of licensed and secondary users in cognitive radio while keeping licensed users unimpaired, a logarithmic-utility power control algorithm based on SINR is presented for interference-limited single-cell CDMA cognitive radio systems, combining the interference temperature model with non-cooperative game theory. Analysis and simulation results show that, compared with the SINR-balancing algorithm and the Koskie-Gajic algorithm, the proposed algorithm guarantees the secondary users' target SINR and the licensed users' interference temperature constraints, and achieves high-rate communication for secondary users with only a moderate increase in their transmit power.
  • CHEN Wei, CHENG Liang-Lun, LEI Xu
    Computer Engineering. 2010, 36(19): 132-133. https://doi.org/10.3969/j.issn.1000-3428.2010.19.045
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    This paper proposes an enhanced AIMRP protocol, E-AIMRP. Designed for event detection and quick-reporting applications, it integrates MAC and routing through cross-layer mechanisms, solves AIMRP's low energy efficiency under multi-node event detection and reporting, and enhances the single-node detection topology model. Simulation results show that E-AIMRP outperforms AIMRP in energy efficiency and delay in event-detection applications.
  • CHEN Hui-Na, TANG Meng-Gao
    Computer Engineering. 2010, 36(19): 134-136. https://doi.org/10.3969/j.issn.1000-3428.2010.19.046
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Formulating energy-efficient protocols is of utmost importance for Wireless Sensor Networks(WSN) because of the energy constraints of sensor nodes. PEGASIS(Power-Efficient Gathering in Sensor Information System) is a low-energy algorithm for WSNs that constructs a chain using a greedy approach. This paper divides the chain into several parts, which improves network performance by reducing inter-node transmission distances. Simulation results show that the improved algorithm uses energy more efficiently.
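    The greedy chain construction and the chain-division step above can be sketched as follows (the classic PEGASIS chain start and an even split into segments; the paper's exact division rule may differ):

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_chain(nodes, bs):
    """PEGASIS greedy chain: start from the node farthest from the
    base station and repeatedly append the nearest unvisited node."""
    start = max(nodes, key=lambda p: dist(p, bs))
    chain, left = [start], set(nodes) - {start}
    while left:
        nxt = min(left, key=lambda p: dist(p, chain[-1]))
        chain.append(nxt)
        left.remove(nxt)
    return chain

def split_chain(chain, parts):
    """Divide the chain into segments to shorten inter-node
    transmission distances (illustrative even split)."""
    size = math.ceil(len(chain) / parts)
    return [chain[i:i + size] for i in range(0, len(chain), size)]
```

    Each segment can then gather data locally before forwarding toward the leader, which is the mechanism the abstract credits for the energy savings.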
  • LIANG Chao-Fang, WU Mu-Qing, JUAN Yan
    Computer Engineering. 2010, 36(19): 137-138. https://doi.org/10.3969/j.issn.1000-3428.2010.19.047
    Abstract ( ) Download PDF ( )   Knowledge map   Save

    Opportunistic routing adopts a delayed-routing mechanism, a departure from traditional routing. This paper designs an opportunistic routing protocol that adapts to the volatility of wireless channels and implements it on the Linux platform as a loadable kernel module. Preliminary results show that the protocol greatly improves packet delivery and decreases delay. Most importantly, the mechanism is almost unaffected by the changing wireless channel and adapts well to its instability and unpredictability.

  • LI Jie, WANG Tao, YANG Wen-Bao, CHEN Hong-Liang
    Computer Engineering. 2010, 36(19): 139-141. https://doi.org/10.3969/j.issn.1000-3428.2010.19.048
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    To overcome the complexity and limited applicability of common multi-domain network topology discovery algorithms, this paper proposes a topology discovery algorithm based on layer-3 switching and VLANs. The algorithm abstracts the VLAN network, groups all VLANs, and runs topology discovery within each VLAN group. Analysis shows that it makes full use of VLAN information during topology discovery and can accurately discover the topology of a VLAN network.
  • WANG Zi-Chao, CONG Jing, HUANG Yong-Feng, BO Jiao
    Computer Engineering. 2010, 36(19): 142-144. https://doi.org/10.3969/j.issn.1000-3428.2010.19.049
    Abstract ( ) Download PDF ( )   Knowledge map   Save

    This paper designs and implements a high-precision network emulator based on protocol filtering, employing Network Driver Interface Specification(NDIS) technology on the Windows platform. The emulator can emulate various WAN link transmission conditions in real time, including network bandwidth, packet loss, transmission delay, and out-of-order delivery. Experimental results indicate that the emulator meets its high-precision, low-load design requirements. It can help study the QoS of multimedia network applications, analyze network protocols, and verify network control algorithms.

  • LONG Jian-Juan, LI Jing-Hua, JI Jian-Bei, LIU Xiao-Gang
    Computer Engineering. 2010, 36(19): 145-147. https://doi.org/10.3969/j.issn.1000-3428.2010.19.050
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    In traditional wireless networks, the TCP congestion control mechanism cannot handle data loss well. Aiming at this problem, this paper proposes an improved cross-layer design of the wireless TCP protocol, NTCP. NTCP splits the traditional functions of TCP into two parts: transmission rate control and data integrity control. It avoids using forwarding-node information and maximizes the use of wireless resources. Compared with traditional TCP-Reno, the method increases wireless network throughput and improves stability and rate adaptivity.
  • ZHOU Qin, DAI Jia-Zhu, JIANG Gong
    Computer Engineering. 2010, 36(19): 148-150. https://doi.org/10.3969/j.issn.1000-3428.2010.19.051
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Energy is limited in Wireless Sensor Networks(WSN), so data aggregation is used to save energy by merging redundant data and reducing the amount transmitted, but the cost of aggregation itself is non-negligible. Focusing on minimizing the total energy cost of the WSN, this paper improves the Adaptive Fusion Steiner Tree(AFST) by substituting a Dynamic Shortest Path Tree(DSPT) routing algorithm for the Shortest Path Tree(SPT) routing algorithm, adapting to changes in the network environment and data size. Experiments and analysis show that, for directly relayed data, DSPT routing saves more energy and is more efficient than SPT routing when data size changes.
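    The SPT that the dynamic variant maintains is just a Dijkstra parent-pointer tree; a minimal sketch (the DSPT would rerun or repair this as link costs or data sizes change, by a mechanism the abstract does not detail):

```python
import heapq

def shortest_path_tree(graph, root):
    """Dijkstra-based Shortest Path Tree(SPT).
    graph: {u: {v: cost}}; returns (parent, dist) with parent
    pointers toward the root."""
    dist = {root: 0.0}
    parent = {root: None}
    pq = [(0.0, root)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(pq, (nd, v))
    return parent, dist
```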
  • CHEN E-Juan, MENG Xian-Meng, JIN Yuan-Beng
    Computer Engineering. 2010, 36(19): 151-153. https://doi.org/10.3969/j.issn.1000-3428.2010.19.052
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    To balance the costs of location update and paging, this paper exploits the three-level database structure of the Gateway Location Register(GLR) and designs a movement-area-based dynamic location management method. The method optimizes the movement-area size according to the subscriber's speed and call arrival rate to reduce the total cost of location management. Proper paging strategies are chosen, depending on the call mobility rate, to further optimize performance. Simulation results show that the method obtains a corresponding optimal movement threshold.
  • MENG Xian-Yong
    Computer Engineering. 2010, 36(19): 154-155,158. https://doi.org/10.3969/j.issn.1000-3428.2010.19.053
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    This paper proposes a new multi-bank E-cash scheme based on bilinear pairings, using proxy-signcryption technology to realize multi-bank proxy authorization and E-cash generation. Theoretical analysis shows that bilinear pairings allow a safer, simpler and more efficient multi-bank E-cash scheme: signcryption makes the protocol simpler and more efficient, and its computation and communication costs are relatively low, so the scheme is well suited to secure mobile terminals and all kinds of embedded devices.
  • GAO Jing-Zhe, DIAO Xin-Jie, JIAO Wen-Cheng, TIAN Jun-Jian
    Computer Engineering. 2010, 36(19): 156-158. https://doi.org/10.3969/j.issn.1000-3428.2010.19.054
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    This paper examines the strength of the CLEFIA block cipher against multiple-byte differential fault attack. It presents the principle of CLEFIA and of differential fault analysis. Considering three cases of fault injection, into rounds r, r-1 and r-2, it proposes a new fault analysis method on CLEFIA based on a multiple-byte fault model and verifies it through software simulation. Experimental results demonstrate that, due to its Feistel structure and S-box features, CLEFIA is vulnerable to multiple-byte fault attack: 6~8 faulty ciphertexts suffice to recover the full 128-bit key.
  • HU Jiang-Gong, SHU Xiao-Ning, ZHANG Jian-Zhong
    Computer Engineering. 2010, 36(19): 159-161,164. https://doi.org/10.3969/j.issn.1000-3428.2010.19.055
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    A new multi-proxy signature scheme with a self-certified public key cryptosystem is proposed, based on elliptic curve cryptography. The scheme combines proxy signatures with secret sharing theory, which solves the problem of over-concentration of the proxy signers' power. It introduces a new self-certified public key cryptosystem in which the CA need not certify every user's public key, and it resists forgery attacks and public-key substitution attacks. The proxy group's secret key is generated cooperatively by all proxy signers, reducing communication and improving safety.
  • LIU Wei-Gong, WANG Li-Bin, MA Chang-She
    Computer Engineering. 2010, 36(19): 162-164. https://doi.org/10.3969/j.issn.1000-3428.2010.19.056
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    This paper describes a cross-realm Client-to-Client Password-Authenticated Key Exchange(C2C-PAKE) protocol. Based on it, it refines the formal model and proposes an improved protocol that introduces a public-key mechanism for system security, combined with the hardness of the discrete logarithm problem. The protocol is simple and is analyzed for semantic security and key confidentiality. It also achieves mutual authentication between server and client and resists common attacks such as undetectable online dictionary attacks. Security analysis shows it is safe and effective.
  • XUE Yan-Dong, HAN Xiu-Ling, DAI Chang-Fei
    Computer Engineering. 2010, 36(19): 165-167. https://doi.org/10.3969/j.issn.1000-3428.2010.19.057
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Based on Snort, this paper presents a Distributed Cooperative Intrusion Detection System(DCIDS). By introducing subjective trust theory and feedback theory into cooperative detection, the system reduces the probability of misjudgment and improves its self-adaptive capability. It introduces an inter-node transmission protocol and proposes the cooperation scheme and trust-level update algorithms. The system is tested with simulated intrusions, and results show that it accomplishes cooperative detection and properly reduces the probability of misjudgment.
  • HU Bei, TU Jian-Beng
    Computer Engineering. 2010, 36(19): 168-170. https://doi.org/10.3969/j.issn.1000-3428.2010.19.058
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Intelligent search algorithms for single-task Agent coalition formation suffer from slow convergence, low stability and poor global optimization ability. This paper applies an improved Quantum-behaved Particle Swarm Optimization(QPSO) to these problems. By recording the better positions of all particles and mutating the best-behaved particle, the particle swarm is filtered and convergence is accelerated. Multiple particle swarms search in parallel over a shared history, avoiding local optima. Comparative experiments show that the algorithm identifies the Agent coalition quickly and efficiently, with better run-time performance than other algorithms.
  • HU Chun-Hua, JIAN Kun
    Computer Engineering. 2010, 36(19): 171-173. https://doi.org/10.3969/j.issn.1000-3428.2010.19.059
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    This paper proposes a novel pedestrian detection method. An angular-diffused Shape Context(SC) descriptor is proposed to obtain the histogram of the sampled edge map while accounting for different edge orientations. The Modified Hausdorff Distance(MHD) is employed as the similarity likelihood between the model codebook and the test image. A voting map of pedestrian-center hypotheses is generated, followed by foreground/background segmentation using the binary masks of the templates. In the candidate regions, a color-segmentation-based verification and an SVM-based shape classifier are subsequently applied to reduce two types of false positives. Experiments on a pedestrian image database and the PASCAL database illustrate the improved performance of the angular-diffused SC in representing the upright human body shape, as well as the reduced false positive rate brought by the two-step verification.
  • SHU Fang, GU Jun-Hua, YANG Xin-Wei, YANG Rui-Xia
    Computer Engineering. 2010, 36(19): 174-176. https://doi.org/10.3969/j.issn.1000-3428.2010.19.060
    Abstract ( ) Download PDF ( )   Knowledge map   Save

    Aiming at the bottleneck that SVM classification speed depends on the number of support vectors, this paper proposes a fast classification algorithm for SVM. In feature space it constructs a minimum spanning tree by introducing a similarity measure, and divides the support vectors into groups according to maximum similarity. In each group, a determinant factor and an adjusting factor are found by given rules. To simplify the support vectors, a linear combination of the determinant and adjusting factors is used to fit the weighted sum of support vectors in feature space, so classification speed is improved. Experimental results show that the algorithm achieves a high reduction in classification time at a minor loss of accuracy, and satisfies real-time classification requirements.
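    The spanning-tree step can be sketched with Prim's algorithm over a dissimilarity matrix (a generic sketch: the paper works in kernel feature space, which is abstracted away here):

```python
def minimum_spanning_tree(weights):
    """Prim's algorithm on a dense weight matrix, e.g.
    weights[i][j] = 1 - similarity(i, j); returns MST edges.
    Grouping support vectors then amounts to cutting the
    heaviest (least similar) edges."""
    n = len(weights)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        # Cheapest edge leaving the current tree.
        u, v = min(((i, j) for i in in_tree for j in range(n)
                    if j not in in_tree),
                   key=lambda e: weights[e[0]][e[1]])
        edges.append((u, v))
        in_tree.add(v)
    return edges
```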

  • MAI Xiong-Fa, LI Ling
    Computer Engineering. 2010, 36(19): 177-179. https://doi.org/10.3969/j.issn.1000-3428.2010.19.061
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    To combine preference handling with group decision making, this paper presents a multi-objective particle swarm optimization algorithm based on group distance. It leads particles to fly toward the preferred solution area by stepwise adjusting the group distance between a particle and the solution reference point, and applies a grid strategy with an improved pruning strategy to maintain solution diversity on the Pareto boundary. By finding a preferred, smaller set of Pareto-optimal solutions, computing cost is reduced and the convergence rate is improved. Experimental results show that the solutions found by the algorithm are nearer the Pareto front and satisfy all decision members.
  • BANG Min, TANG Dun
    Computer Engineering. 2010, 36(19): 180-181. https://doi.org/10.3969/j.issn.1000-3428.2010.19.062
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Based on the cloning and affinity variation of antibodies in the Biological Immune System(BIS), this paper presents a real-time detector generation algorithm that can adapt the current detector set for use in an Intrusion Detection System(IDS). Theoretical analysis and experiments show that the algorithm needs fewer evolution generations to detect a large number of abnormal changes, reduces the false negative and false detection rates of the IDS, and improves alarm credibility.
  • MAO Kai-Fu, BAO An-Qing, XU Chi
    Computer Engineering. 2010, 36(19): 182-184. https://doi.org/10.3969/j.issn.1000-3428.2010.19.063
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    To improve the performance of the standard optimization method, a new Particle Swarm Optimization(PSO) algorithm is proposed based on earlier works. A non-symmetric learning-factor adjustment method is introduced to balance global and local search, with clear advantages in convergence and robustness over the standard PSO algorithm. The relationship between swarm average velocity and convergence is studied through simulation on benchmark test functions. The merits above are demonstrated on the compound gear transmission ratio optimization problem in transmission systems.
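    A minimal PSO with non-symmetric, time-varying learning factors (the schedule below, cognitive factor shrinking while the social factor grows, is one common asymmetric scheme; the paper's exact adjustment rule may differ):

```python
import random

def pso(f, lo, hi, n=20, iters=200, seed=1):
    """Minimize f on [lo, hi] with asymmetric learning factors."""
    rnd = random.Random(seed)
    x = [rnd.uniform(lo, hi) for _ in range(n)]
    v = [0.0] * n
    pbest, pval = x[:], [f(xi) for xi in x]
    g = pval.index(min(pval))
    gbest, gval = pbest[g], pval[g]
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters    # inertia weight decays
        c1 = 2.5 - 2.0 * t / iters   # cognitive factor shrinks
        c2 = 0.5 + 2.0 * t / iters   # social factor grows
        for i in range(n):
            v[i] = (w * v[i]
                    + c1 * rnd.random() * (pbest[i] - x[i])
                    + c2 * rnd.random() * (gbest - x[i]))
            x[i] = min(hi, max(lo, x[i] + v[i]))
            fx = f(x[i])
            if fx < pval[i]:
                pbest[i], pval[i] = x[i], fx
                if fx < gval:
                    gbest, gval = x[i], fx
    return gbest, gval
```

    Early iterations favor each particle's own experience (exploration); late iterations favor the swarm best (exploitation), which is the balance the abstract describes.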
  • MAO Lian-Meng
    Computer Engineering. 2010, 36(19): 185-187. https://doi.org/10.3969/j.issn.1000-3428.2010.19.064
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    To resolve the conflict between convergence speed and premature stagnation in the ant colony algorithm, an adaptive ant colony algorithm based on dynamic parameter change and mutation is presented after an in-depth analysis of the model parameters. The parameters are divided into global and local parameters. The local parameter q0 changes with solution quality, and the global parameter changes with the average number of node branches, which markedly enhances global search ability. A simple and efficient mutation operator is adopted to accelerate convergence. Experimental results on TSPLIB instances show that the method achieves much higher solution quality, stability and convergence speed than the classical ant colony algorithm.
  • ZUO Ping-Beng, SUN Bin, GU Hong, JI Dong-Lian
    Computer Engineering. 2010, 36(19): 188-189,192. https://doi.org/10.3969/j.issn.1000-3428.2010.19.065
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Exploiting the rapid convergence and light internal load of Sequential Minimal Optimization(SMO), this paper transplants it into the 1-Fuzzy Support Vector Machine(1-FSVM). To raise training speed, the 1-FSVM algorithm clusters step by step with a hierarchical binary tree structure, assigning different weights at each level to different input vectors so as to express the classification effect correctly. Application to handwritten digit recognition and license plate location shows that the 1-FSVM algorithm achieves a high detection rate and speed.
  • DIAO Shan, DIAO Qian
    Computer Engineering. 2010, 36(19): 190-192. https://doi.org/10.3969/j.issn.1000-3428.2010.19.066
    Abstract ( ) Download PDF ( )   Knowledge map   Save

    Aiming at the fact that Direct Current(DC) and Alternating Current(AC) coefficients express different image content information, this paper presents a JPEG image retrieval algorithm based on DCT coefficients. A difference vector of DC coefficients is constructed to describe image features, and an AC-coefficient distribution entropy is defined according to the characteristics of the AC coefficients. A weight function for the entropy is presented to avoid false and missed retrievals. Experimental results show that the algorithm requires no decompression, has low complexity, and represents the content distribution of images well.
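    An illustrative AC-coefficient distribution entropy in the spirit of the abstract (the bin layout and range are assumptions, not the paper's definition):

```python
import math
from collections import Counter

def ac_entropy(ac_coeffs, bins=16, lo=-128, hi=128):
    """Shannon entropy of the AC-coefficient distribution of a
    DCT block: histogram the coefficients, then -sum(p*log2(p))."""
    width = (hi - lo) / bins
    hist = Counter(min(bins - 1, max(0, int((c - lo) / width)))
                   for c in ac_coeffs)
    n = len(ac_coeffs)
    return -sum((k / n) * math.log2(k / n) for k in hist.values())
```

    A flat block (all AC coefficients near zero) gives entropy close to 0, while a textured block spreads mass across bins and gives high entropy, so the feature separates smooth from detailed content.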

  • ZHANG Li-Wei, XIE Shao-Rong, LUO Jun, WANG Chao
    Computer Engineering. 2010, 36(19): 193-194,197. https://doi.org/10.3969/j.issn.1000-3428.2010.19.067
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Based on binocular stereo vision theory, a binocular vision system whose axis angle can be changed in real time is presented. By processing target images acquired in real time, the axis angle of the stereo vision sensor is calibrated and the real-time height of an unmanned helicopter is obtained. Experimental results show that the calibration error of the axis angle is less than 5%. The system can rotate the stereo vision sensor accurately, avoid the landing error caused by the blind spot of binocular vision, and provide security for autonomous landing of the unmanned helicopter.
  • XU Yin-Ling
    Computer Engineering. 2010, 36(19): 195-197. https://doi.org/10.3969/j.issn.1000-3428.2010.19.068
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    To improve the accuracy and robustness of the Support Vector Domain Classifier(SVDC), KSVDD is proposed based on K-Nearest Neighbor(KNN) and Support Vector Domain Description(SVDD). The classifier applies the SVDD decision to test samples inside a single class description, and adopts the KNN rule for test samples inside overlapped regions or outside the description boundaries. By rejecting samples outside the description boundaries, the classifier can also be generalized to rejection decisions. Numerical experiments on UCI data show that KSVDD is more accurate than SVDC, comparable with SVM, faster to train than SVM, more robust, and performs well in rejection decisions.
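    The decision rule can be sketched as follows. Note the simplification: each class "description" is stood in for by a hypothetical precomputed (center, radius) pair rather than a trained SVDD sphere, which is enough to show the routing between the SVDD decision and the KNN fallback:

```python
import math
from collections import Counter

def ksvdd_predict(x, spheres, train, k=3):
    """KSVDD-style rule (sketch). spheres: class -> (center, radius),
    standing in for trained SVDD descriptions. A point inside exactly
    one sphere takes that class; in an overlap region or outside all
    spheres, fall back to the KNN rule over labeled training points."""
    d = lambda a, b: math.dist(a, b)
    inside = [c for c, (ctr, r) in spheres.items() if d(x, ctr) <= r]
    if len(inside) == 1:
        return inside[0]
    neigh = sorted(train, key=lambda t: d(x, t[0]))[:k]
    return Counter(lbl for _, lbl in neigh).most_common(1)[0][0]
```

    A rejection variant would return None when `inside` is empty instead of falling through to KNN.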
  • HE Jiang-Ping, MA Pan
    Computer Engineering. 2010, 36(19): 198-199,202. https://doi.org/10.3969/j.issn.1000-3428.2010.19.069
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    A new method based on edge information is proposed, studying the use of fast radial symmetry and the Hough transform for triangular traffic sign detection. The inner circle point of the triangle is detected by fast radial symmetry, and a specified window is used to compute the Hough transform of small regions of the image; the triangle is then detected in Hough space. Experimental results show that the algorithm makes up for the shortcomings of existing algorithms and has higher detection efficiency.
  • LIU Cai-Yun, CHEN Zhong, XIONG Jie
    Computer Engineering. 2010, 36(19): 200-202. https://doi.org/10.3969/j.issn.1000-3428.2010.19.070
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Existing ant colony systems need long computation times on large-scale combinatorial optimization problems. Aiming at this shortcoming, this paper presents a coarse-grained asynchronous Parallel Max-Min Ant System(PMMAS) based on the Message Passing Interface(MPI), which reduces the communication cost of parallel computation while guaranteeing solution quality. Numerical results obtained on the Dawning 4000L parallel computer indicate that the system has good speedup and efficiency and is suitable for solving large-scale Traveling Salesman Problem(TSP) instances.
  • KONG De-Yong, ZHANG Jian-Jun
    Computer Engineering. 2010, 36(19): 203-204,207. https://doi.org/10.3969/j.issn.1000-3428.2010.19.071
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Aiming at problems of meta search engines such as low precision, this paper proposes a genetic algorithm-based ranking algorithm that analyzes the domain relevance of results returned by professional search engines. When mining the results returned by the member search engines, it uses information beyond position information and applies a genetic algorithm to build a domain-relevance model for Web sites, with which the domain relevance of Web pages is computed. The principle and implementation of the algorithm are discussed, and improvements to it are provided. Experimental results show that the algorithm is effective.
  • WANG Jue, HUANG Xia, JU Yong-Ning
    Computer Engineering. 2010, 36(19): 205-207. https://doi.org/10.3969/j.issn.1000-3428.2010.19.072
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    To detect bubble defects in Digital Radiography(DR) images of railway castings automatically, the Snake model is applied to segment these defects, and the method of obtaining the initial contour of the Snake model is improved. The barycenter of each bubble is obtained by automatic thresholding and region growing, and a radial-line-based method yields the series of initial control points of the Snake model, which are converged by the greedy algorithm. Experiments on simulated images show that the minimal detectable size is 3×3 pixels and that the initial contour can converge to concave areas. Experiments on real DR images prove that the algorithm detects bubble contours in the inspected region exactly, produces no false defects, and improves the automation of defect detection.
  • JIN Wang-Beng, LI Feng-Fei, HUI Sui, LIANG Dong
    Computer Engineering. 2010, 36(19): 208-209,212. https://doi.org/10.3969/j.issn.1000-3428.2010.19.073
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    This paper proposes an active contour algorithm for object detection in multi-channel images. The model extends the scalar C-V algorithm to the vector-valued case: it minimizes a Mumford-Shah functional over the length of the contour plus the sum of the fitting errors over each component of the multi-channel image. Like the C-V model, the vector-valued model can detect edges with or without gradient, and it solves the problem of detecting multi-channel objects that are undetectable in any scalar representation. The initial function is improved to increase segmentation speed.
  • LI Zhao-Xin, ZHANG Da-Kun
    Computer Engineering. 2010, 36(19): 210-212. https://doi.org/10.3969/j.issn.1000-3428.2010.19.074
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    A feature-based algorithm for mosaicing images of scenes with moving objects is proposed. The algorithm eliminates feature points selected on moving objects using a similarity transform, and outliers are eliminated using the RANSAC algorithm. In the image blending phase, the difference of the two images is computed using the homogeneous transformation matrix, and a region-growing algorithm segments the moving-object region. A piecewise mapping algorithm generates the panoramic images. Experimental results show the effectiveness of the algorithm.
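    The outlier-elimination step uses the standard RANSAC loop. A minimal sketch on 2-D line fitting (the paper estimates a homography between images instead, but the hypothesize-score-keep-best structure is the same):

```python
import random

def ransac_line(points, iters=200, tol=0.5, seed=0):
    """Fit y = a*x + b by RANSAC: repeatedly hypothesize a model
    from a minimal sample, count inliers within tol, keep the best."""
    rnd = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rnd.sample(points, 2)
        if x1 == x2:
            continue  # degenerate sample, skip
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inl = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inl) > len(best_inliers):
            best, best_inliers = (a, b), inl
    return best, best_inliers
```

    The points rejected as outliers here play the role of the mismatched feature pairs discarded before the mosaic transform is estimated.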
  • LAI Pan-Dong, CHEN Fen, LIU Xiao-Yun
    Computer Engineering. 2010, 36(19): 213-215,218. https://doi.org/10.3969/j.issn.1000-3428.2010.19.075
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Images restored with the Wiener filter or least squares algorithms exhibit ringing where the image gray level jumps. The constrained adaptive image restoration algorithm effectively overcomes ringing through local adaptive control of restoration and smoothness. On this basis, the adaptive control ability is improved by increasing the precision of the weighting array, and the artifacts caused by boundary truncation are removed by adopting the Neumann boundary condition. Experimental results show that the improved algorithm achieves a better restoration effect.
  • WANG Hui-Feng, ZHOU Li-Chi, ZHANG Jie
    Computer Engineering. 2010, 36(19): 216-218. https://doi.org/10.3969/j.issn.1000-3428.2010.19.076
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    Bicubic interpolation is an effective way to obtain high-quality high-resolution images, but its calculation load is high. After discussing commonly used interpolation methods, a new region-based bicubic interpolation is proposed. Without segmenting the image, the mean value of the four points neighboring the interpolated point is calculated and used to divide the image into two regions: a flat region and a complex region with more details. A different interpolation algorithm is chosen for each region. Experimental results show that the proposed algorithm keeps image quality equal to the original algorithm while reducing the calculation load by more than 10 percent, which is useful in applications.
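    The per-point region test and the cubic kernel can be sketched as follows (the threshold and the use of the Catmull-Rom kernel are illustrative assumptions; the paper only says the four-neighbor mean drives the split):

```python
def classify_region(neighbors, thresh=10.0):
    """Region test: take the four pixels around the interpolated
    point; if each deviates little from their mean the region is
    'flat' (cheap interpolation suffices), else 'complex' (bicubic)."""
    m = sum(neighbors) / 4.0
    return "flat" if max(abs(v - m) for v in neighbors) < thresh else "complex"

def catmull_rom(p0, p1, p2, p3, t):
    """1-D cubic (Catmull-Rom) kernel, applied along rows then
    columns in the complex region; t in [0, 1] between p1 and p2."""
    return 0.5 * ((2.0 * p1) + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t ** 3)
```

    Flat regions can fall back to averaging or bilinear interpolation, which is where the reported load reduction comes from.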
  • WANG Dong, ZHOU Shi-Sheng
    Computer Engineering. 2010, 36(19): 219-221. https://doi.org/10.3969/j.issn.1000-3428.2010.19.077
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    This paper presents a novel learning-based approach to pyrography painting style simulation. Based on Hertzmann's image analogies algorithm, it proposes a novel Particle Swarm Optimization(PSO) combined with a roulette selection operator to speed up image processing. The algorithm selects the best position found by the swarm so far with roulette-wheel selection, decreasing the probability of premature convergence to local minima. Compared with the Approximate Nearest Neighbor(ANN) search, it obtains faster processing speed. To compensate for visual defects in the output texture of Hertzmann's algorithm, RGB signals are converted to the perception-based color space lαβ before further computing. Experimental results demonstrate that the algorithm is efficient for pyrography painting simulation.
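    The roulette-wheel operator itself is standard and can be sketched in a few lines (assuming higher fitness means better; the candidates-and-fitness interface is an illustrative simplification of selecting among swarm positions):

```python
import random

def roulette_select(candidates, fitness, rnd=random):
    """Roulette-wheel selection: pick a candidate with probability
    proportional to its fitness, so good positions are favored but
    weaker ones still have a chance, resisting premature convergence."""
    total = sum(fitness)
    pick = rnd.uniform(0.0, total)
    acc = 0.0
    for c, f in zip(candidates, fitness):
        acc += f
        if acc >= pick:
            return c
    return candidates[-1]  # guard against floating-point drift
```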
  • XU Yong-Qing, WANG Shu-Wen, LI Xiang-Qun
    Computer Engineering. 2010, 36(19): 222-223,226. https://doi.org/10.3969/j.issn.1000-3428.2010.19.078
    Abstract ( ) Download PDF ( )   Knowledge map   Save
    As the quality of a repaired image is significantly influenced by the filling order of boundary pixels, this paper improves the priority computation of the exemplar-based algorithm by means of D-S evidence theory, analyzing the image features around each pixel and taking into account the proportions of texture and structural characteristics. Experimental results are given and prove the algorithm effective in improving the repaired image's visual quality.
  • JIA Kai-Jian, TAO Yu-Feng, ZHONG Shan, CHANG Jin-Xi
    Computer Engineering. 2010, 36(19): 224-226. https://doi.org/10.3969/j.issn.1000-3428.2010.19.079
    After expounding image wavelet fusion algorithms, an algorithm using mathematical morphology wavelet transform is proposed. The image is decomposed into a high-frequency image and a low-frequency image, which are stored in an extended matrix. For selecting low-frequency component coefficients, a mathematical morphology approach is used for edge detection; after the edge image is obtained, the weighted average method fuses the scale coefficients of the edge image. For the high-frequency image, the coefficients with maximal absolute values are selected and their consistency is verified. The algorithm is tested on several images. Experimental results show that the proposed method outperforms conventional methods and effectively improves image resolution.
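The max-absolute-value rule for the high-frequency sub-bands amounts to a coefficient-wise choice, sketched here on flat lists of coefficients (a simplification of ours; real sub-bands are 2-D):

```python
def fuse_highfreq(coeffs_a, coeffs_b):
    """Fuse two high-frequency coefficient lists by keeping, at each
    position, the coefficient with the larger magnitude."""
    return [a if abs(a) >= abs(b) else b for a, b in zip(coeffs_a, coeffs_b)]
```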
  • TUN Ao, FANG Xiang-Zhong, DONG Hao-Yuan
    Computer Engineering. 2010, 36(19): 227-228,231. https://doi.org/10.3969/j.issn.1000-3428.2010.19.080
    Without knowing scene structure, caustic distortions due to multi-viewpoint imaging cannot be removed. Based on the skewed light rays in multi-viewpoint imaging, this paper extracts depth information from structured features in distorted images and performs 3D Euclidean reconstruction from a single multi-viewpoint image. The reconstruction principle covers all curves of known functional form, and provides specially designed mathematical approaches for features such as space lines and circles.
  • TAN Hong-Bei, HOU Zhi-Jiang, LIU  Rong, GUO Wei-Wu
    Computer Engineering. 2010, 36(19): 229-231. https://doi.org/10.3969/j.issn.1000-3428.2010.19.081
    An adaptive marker extraction-based watershed algorithm is proposed to overcome the over-segmentation problem. By combining local minima depth with water basin scale information, markers are adaptively extracted for local minima, and the marker-extraction threshold is automatically calculated from the statistics of local extreme points in the gradient map. These markers are imposed on the original gradient map as its local minima, and the watershed algorithm is applied on the modified gradient map to segment the image. Simulation results show that the proposed method can efficiently reduce over-segmentation with scarcely any increase in computational complexity, and it also has better anti-noise performance and edge-location capability.
  • ZHANG Zhi-Fu, KANG Zhi-Wei
    Computer Engineering. 2010, 36(19): 232-233,236. https://doi.org/10.3969/j.issn.1000-3428.2010.19.082
    This paper proposes a video moving object segmentation algorithm in the H.264 compressed domain. The motion vector field extracted from the H.264 compressed stream is processed by noise removal and median filtering. An iterative backward projection scheme is proposed to obtain an accumulated motion vector field, and the moving object region is then extracted by k-means clustering. Experimental results show that moving objects are segmented well without decoding the compressed video stream.
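The clustering step can be illustrated with a tiny 1-D two-cluster k-means on motion-vector magnitudes (the paper clusters full accumulated MV fields; this simplification is ours):

```python
def two_means(values, iters=20):
    """1-D k-means with k=2: split scalar values (e.g. MV magnitudes)
    into a low cluster (background) and a high cluster (moving object)."""
    c0, c1 = min(values), max(values)
    for _ in range(iters):
        g0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        g1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        if g0:
            c0 = sum(g0) / len(g0)
        if g1:
            c1 = sum(g1) / len(g1)
    return c0, c1
```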
  • LIN Qing-Bing, CHEN Yuan, JIANG Wei, HUANG Zi-Wu
    Computer Engineering. 2010, 36(19): 234-236. https://doi.org/10.3969/j.issn.1000-3428.2010.19.083
    This paper proposes a block-matching algorithm named Small Diamond-Linear Search(SDLS) using a small diamond-linear search pattern. For stationary blocks, SDLS can find the Motion Vector(MV) with one small diamond search. For blocks with motion, SDLS derives a block-distortion descent direction from the distribution of computed block distortions. By using a mixed small diamond-linear search pattern, it reduces the number of search points and locates the MV rapidly. Search center prediction further improves search speed and quality. Experimental results show that the proposed algorithm decreases the search points by more than 50% compared with Diamond Search(DS), Cross-Diamond Search(CDS), etc., while maintaining similar picture PSNR.
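One small-diamond probe of such a search can be sketched as follows; `sad` stands for any block-distortion measure and is an assumed callable, not the paper's interface:

```python
SMALL_DIAMOND = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]

def small_diamond_step(sad, cx, cy):
    """Evaluate distortion at the centre and its four diamond
    neighbours. If the centre wins, the block is stationary and the
    MV is found in a single probe; otherwise the winning neighbour
    indicates the distortion-descent direction for the linear search."""
    costs = {(cx + dx, cy + dy): sad(cx + dx, cy + dy)
             for dx, dy in SMALL_DIAMOND}
    best = min(costs, key=costs.get)
    return best, best == (cx, cy)
```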
  • NIE Xiu-Shan, LIU Ju, QIN Feng-Lin
    Computer Engineering. 2010, 36(19): 237-238,243. https://doi.org/10.3969/j.issn.1000-3428.2010.19.084
    A digital video watermark scheme based on Isometric Mapping(ISOMAP) is proposed. Frames are mapped to points in two-dimensional space using ISOMAP, and watermarks are embedded into the differences between video frames and their images under the mapping through Singular Value Decomposition(SVD). Experimental results show very strong robustness against spatial desynchronization attacks such as rotation, scaling and cropping, and high robustness against noise and median filtering. In addition, the scheme resists temporal desynchronization such as frame dropping and insertion to some extent.
  • WEN Jing, HAN Xie-Fei
    Computer Engineering. 2010, 36(19): 239-240,243. https://doi.org/10.3969/j.issn.1000-3428.2010.19.085
    A video Hash algorithm based on 3D Wavelet Transform(3D WT) is proposed to improve the security of video watermark systems. The pre-processed video sequence undergoes row-wise, column-wise, and temporal wavelet transforms in turn to complete the three-dimensional discrete wavelet transform, and the Hash is computed from the result. Experimental results show the robustness of the proposed video Hash algorithm against intra-pixel moving attacks and random frame jitter attacks.
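Applying a 1-D wavelet step along rows, then columns, then the time axis yields the separable 3-D transform described above. A one-level Haar step (used here only as the simplest wavelet; the abstract does not specify the filter) looks like:

```python
def haar_step(signal):
    """One level of the 1-D Haar transform: pairwise averages
    (low-pass) followed by pairwise half-differences (high-pass).
    Assumes an even-length input."""
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    det = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg + det
```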
  • MAO Yun-Liu, HUANG Dong-Jun
    Computer Engineering. 2010, 36(19): 241-243. https://doi.org/10.3969/j.issn.1000-3428.2010.19.086
    A video segmentation method based on Speeded Up Robust Features(SURF) is proposed, combined with Independent Component Analysis(ICA), and the video watermark is embedded through Singular Value Decomposition(SVD). The method is robust against collusion attacks and temporal desynchronization, and also improves robustness against geometric attacks. Experimental results show the method is sufficiently robust against several kinds of watermark attacks, such as geometric attacks and frame dropping and reduction.
  • XU Juan, HAN Jiang-Hong, ZHANG Jian-Jun, ZHANG Li
    Computer Engineering. 2010, 36(19): 244-246. https://doi.org/10.3969/j.issn.1000-3428.2010.19.087
    How to use the abundant diagnosis resources in a WAN and how to implement cooperative diagnosis are the main targets of remote cooperative diagnosis. After introducing grid technology and analyzing the working principle of the remote cooperative diagnosis system, the system framework and workflow are presented. The job management module, one of the key technical points of the system, is designed, and the resource scheduling algorithm is realized. This provides a more effective approach to remote collaborative fault diagnosis of complex equipment.
  • FENG Xiao-Gang, LI Dui, CHEN Chong-Cheng
    Computer Engineering. 2010, 36(19): 247-249. https://doi.org/10.3969/j.issn.1000-3428.2010.19.088
    This paper designs and constructs a prototype framework for a distributed virtual forest and fire-fighting simulation environment, to meet the requirement of performing fire-fighting simulation tests on multiple scenarios. Based on this framework, it designs the forest fire-fighting federation, including the federation model, SOM and FOM design, federation object classes and interaction classes, and designs the simulation process and time management with time-regulating and time-constrained settings. On this basis, a prototype of virtual fire-fighting software is developed with HLA, VC++ and OpenGL on a local area network.
  • HU Zhi-Gang, YUAN Ming-Ju, JIANG Xiang-Chao
    Computer Engineering. 2010, 36(19): 250-252. https://doi.org/10.3969/j.issn.1000-3428.2010.19.089
    Existing research does not take into account the impact of different Scratch Pad Memory(SPM) access-address sequences on power consumption. This paper proposes an SPM low-power strategy based on circuit activity: by reorganizing the layout of instructions and data in SPM, it reduces circuit activity and power consumption when accessing memory objects in SPM. Experimental results show an average reduction of more than 15% in energy consumption, compared with a basic strategy that does not consider circuit activity.
  • MEI Song-Zhu, LI Zong-Ba
    Computer Engineering. 2010, 36(19): 253-255. https://doi.org/10.3969/j.issn.1000-3428.2010.19.090
    The NFTL algorithm has several deficiencies, such as low space utilization, low erase efficiency, and guaranteeing access speed at the expense of Flash’s read-performance advantage, and it is not suitable for solid-state drives. This paper presents an improved scheme that addresses these shortcomings by keeping additional metadata in memory, including a valid-bit map and a reverse mapping table, making the NFTL algorithm applicable to solid-state drives and improving its performance.
  • XU Rong-Long, LIU Zheng-Cha
    Computer Engineering. 2010, 36(19): 256-257,260. https://doi.org/10.3969/j.issn.1000-3428.2010.19.091
    To let researchers quickly evaluate the menu selection performance of an interactive system, a model is derived based on Fitts’ law and the Steering law, and an experiment is executed to test its validity. The experiment has two phases. The goal of the first phase is to calculate the coefficients of the model, which are then applied to the data analysis of the second phase. The goal of the second phase is to test the validity of the model. Experimental results indicate that the selection performance predicted by the model coincides very well with the observed selection performance, with a correlation of 0.959.
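The Fitts' law half of such a model predicts pointing time from target distance and width; a minimal sketch follows (the coefficients in the test are illustrative, not the fitted values from the paper):

```python
import math

def fitts_time(a, b, distance, width):
    """Fitts' law: MT = a + b * log2(D / W + 1), where a and b are
    empirically fitted coefficients (phase one of the experiment)
    and D, W are target distance and width."""
    return a + b * math.log2(distance / width + 1)
```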
  • LI Xiao-Li, DU Zhen-Long
    Computer Engineering. 2010, 36(19): 258-260. https://doi.org/10.3969/j.issn.1000-3428.2010.19.092
    Conventional search engines are unsuitable for search requests from people with different cultures, goals or periods. This paper proposes a personalized search method which analyzes the user’s search pattern from Cookies, constructs a user interest vector from the search pattern, and drives the engine by the search pattern, so the presented approach can rank results with high correlation to user interests. A personalized search engine is implemented based on Lucene, and experiments show that the user-interest-driven personalized search engine can effectively rank the results users are interested in.
  • HAN Fei
    Computer Engineering. 2010, 36(19): 261-262,265. https://doi.org/10.3969/j.issn.1000-3428.2010.19.093
    Aiming at the drawbacks of the fault method, the paper presents a new method for terrain generation. The fault method is used to generate the terrain height field, a linear filtering algorithm then smooths the height field to eliminate cracks, and a fractal method adds multiple layers of detail to the terrain. This method produces natural terrain with rich detail and realism. Test results show the method is practical for virtual environments, and realistic terrain can be generated on a general PC platform.
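The fault method itself is simple to sketch: repeatedly cut the grid with a random line and displace the two sides in opposite directions. The parameters below are illustrative; the paper additionally smooths the field and adds fractal detail.

```python
import random

def fault_heightfield(n, iterations, delta=1.0, seed=0):
    """Generate an n x n height field with the fault method: each
    iteration picks a random line through the grid and raises one
    side by delta while lowering the other."""
    random.seed(seed)
    h = [[0.0] * n for _ in range(n)]
    for _ in range(iterations):
        x1, y1 = random.uniform(0, n), random.uniform(0, n)
        x2, y2 = random.uniform(0, n), random.uniform(0, n)
        a, b = y2 - y1, -(x2 - x1)  # normal of the fault line
        for y in range(n):
            for x in range(n):
                side = a * (x - x1) + b * (y - y1)
                h[y][x] += delta if side > 0 else -delta
    return h
```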
  • HUANG Qi-Fu, CHEN Jian-Hong
    Computer Engineering. 2010, 36(19): 263-265. https://doi.org/10.3969/j.issn.1000-3428.2010.19.094
    To make the fullest use of non-renewable mineral resources, Genetic Algorithm(GA) is adopted to solve the multi-objective parameter optimization of mining enterprise production and operation, in which four dynamically linked elements (ore reserves, grade, cost and price) are optimized jointly. An optimization model of the four elements is established, the elements affecting the multi-objective optimization are encoded as initial chromosomes, and the genetic operators are redefined to solve the multi-objective parameter optimization, yielding a dynamic parameter optimization model for mining enterprise production and operation. With this model, as mineral prices change, the break-even grade, the cost-price ratio and the available reserves can be determined and mining costs adjusted simply, quickly and intuitively.
  • ZHOU Jian, BO Jia-Xin, CHENG Ke-Qi
    Computer Engineering. 2010, 36(19): 266-268. https://doi.org/10.3969/j.issn.1000-3428.2010.19.095
    In the evolution of the BBV weighted scale-free network, new nodes obtain information about the entire network when they join. In some real complex networks, however, only a few nodes can obtain global information, while most nodes obtain only local information. The local-world model is therefore introduced into BBV, and a new local-world BBV weighted network model is proposed. Theoretical analysis shows that the node strength of this network model obeys a power law, and the power-law exponent can be adjusted between 1 and 3 by controlling the parameters. The simulation results agree with the theoretical analysis.
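The local-world attachment rule can be sketched as follows: a new node samples a small local world and attaches preferentially by node strength inside it (function and parameter names here are ours, not the paper's):

```python
import random

def local_world_attach(strengths, m_local, seed=None):
    """Pick the node a newcomer attaches to: sample m_local nodes as
    the local world, then choose among them with probability
    proportional to node strength (BBV preferential attachment
    restricted to the local world). strengths maps node id -> strength."""
    rng = random.Random(seed)
    local = rng.sample(list(strengths), m_local)
    total = sum(strengths[i] for i in local)
    r = rng.uniform(0, total)
    acc = 0.0
    for node in local:
        acc += strengths[node]
        if r <= acc:
            return node
    return local[-1]  # numerical safety net
```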
  • JIN Jun-Hang, ZHANG Da-Fang, HUANG Hun
    Computer Engineering. 2010, 36(19): 269-271. https://doi.org/10.3969/j.issn.1000-3428.2010.19.096
    In order to analyze and evaluate existing regular expression matching algorithms, this paper implements DFA, D2FA, CD2FA, mDFA and XFA, and conducts extensive experiments on short rules to evaluate their performance. Experimental results show that, compared with mDFA, XFA achieves 84.9%~89.9% memory reduction while increasing matching time by only 38.9%~174.6%. When the number of rules increases 8-fold, mDFA’s memory grows 64-fold, while XFA’s memory grows only 16-fold with a 61.3% increase in matching time, which demonstrates that XFA scales well in both memory requirements and matching efficiency.
  • HE Xiao-Zhong, HUANG Yong-Zhong, YANG Yue, NA Yu
    Computer Engineering. 2010, 36(19): 272-273,276. https://doi.org/10.3969/j.issn.1000-3428.2010.19.097
    In service-oriented architecture, transparent service take-over is the main way to reduce the chance that a consumer’s service request fails when a service becomes unavailable. For transparent service take-over, a client-side, proxy-based approach avoids the decrease in overall service availability that overloading can cause with a centralized server-side approach. In the client-side, proxy-based approach, automatic generation of proxy source code from the service description can obviously improve efficiency and decrease the difficulty of the development process.
  • LI Bu-Sheng
    Computer Engineering. 2010, 36(19): 274-276. https://doi.org/10.3969/j.issn.1000-3428.2010.19.098
    This paper proposes a data hiding method to resist analysis and investigation by a variety of forensic software. The method lets users select an appropriate file from the target file system as a carrier, and the data to hide is processed by a symmetric encryption algorithm and XOR before being embedded in the carrier file. The data is then embedded into normal files while ensuring the files can still be opened correctly. This method avoids spending a lot of time searching for free space in which to hide the file; it is fast and strongly resists computer forensics.
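The XOR pre-processing step can be sketched as below (the symmetric encryption stage is omitted, and the key handling is purely illustrative):

```python
def xor_bytes(data, key):
    """XOR the payload with a repeating key. Applying the same key
    twice recovers the original bytes, which is what makes the step
    reversible at extraction time."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```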
  • QU Jian, HONG Bin-Jiang, QU Jing, LIU Jiang
    Computer Engineering. 2010, 36(19): 277-279,282. https://doi.org/10.3969/j.issn.1000-3428.2010.19.099
    Aiming at the low efficiency of component retrieval from the component database in a reconfigurable network routing platform, a facet-based classification method for software and hardware components is presented, and a component retrieval algorithm based on Back Propagation Neural Network(BPNN) is designed. Analysis shows the algorithm suits large, complex component databases thanks to its parallel execution, high efficiency and speed. Simulation indicates that the algorithm outperforms most current retrieval algorithms, especially on large component databases.
  • GUO Wen-Huo, CHEN Gong, LIU Mo-Jun
    Computer Engineering. 2010, 36(19): 280-282. https://doi.org/10.3969/j.issn.1000-3428.2010.19.100
    To meet the needs of frequently changing enterprise business processes, this paper builds a data sharing and exchange platform based on SOA, using Web service and Enterprise Service Bus(ESB) technology. The platform enables cross-departmental data exchange and information sharing. A design example of the platform is given, which can solve the practical problem of inconsistent enterprise product information.
  • LAI Xiao-Fei, CA Min
    Computer Engineering. 2010, 36(19): 283-284,287. https://doi.org/10.3969/j.issn.1000-3428.2010.19.101
    The iSLIP algorithm updates the pointers only after the first iteration; otherwise connections may be starved. However, this pointer update mode causes trouble under some network traffic patterns. This paper evaluates the algorithm’s performance when the pointers are updated after every iteration under random network traffic. Results show that updating the pointers after every iteration does not starve connections, and under Bernoulli and on-off traffic its performance is as good as the original scheme.
  • LEI Gui-Yan, GUO Quan
    Computer Engineering. 2010, 36(19): 285-287. https://doi.org/10.3969/j.issn.1000-3428.2010.19.102
    This paper presents a solution combining medicine molecular docking with grid techniques to address problems such as an overly large search space, long docking time and a complicated computing environment. Based on GAsDock, a docking evolution model with a multi-population competition mechanism of GA, and using informational entropy, it controls the narrowing of the search space and improves evolutionary rationality and docking efficiency. A local tree structure and an error-tolerance mechanism are then provided, reducing docking time and using grid resources more efficiently. Test results show that the combination of refined medicine molecular docking design and grid techniques is reasonable and efficient.
  • YAN Zhi-Jia
    Computer Engineering. 2010, 36(19): 288-290. https://doi.org/10.3969/j.issn.1000-3428.2010.19.103
    To allow the user to watch several animations of the same object in one scene, this paper introduces the basic modules and process of animation implemented by a 3D modeling system, based on UML and a whole-part decomposition method. Combining the object-oriented design method with key-frame management as the main line, it analyzes practical cases in detail. Key frames are organized in a doubly linked cross list structure, which supports multi-functional debugging of animations. The drawing of the partially realized functions provides a basis for further code realization.
  • SHI Wei-Min, SHI Chun-Hui, CHAI Xiao-Li, ZHANG Le
    Computer Engineering. 2010, 36(19): 291-inside back cover. https://doi.org/10.3969/j.issn.1000-3428.2010.19.104
    With respect to the requirement of high-speed communication between two kinds of fabric in a high-performance embedded system, this paper proposes a hardware implementation of a RapidIO-FC switching bridge on FPGA. The method builds the switching control logic and bridge interface with RapidIO and Fibre Channel(FC) IP cores on the Xilinx Virtex5 development platform. The function of the bridge is validated, and the hardware architecture diagrams and key design ideas are also introduced. The method is proved valid by logic simulation and physical tests.