
15 May 2016, Volume 42 Issue 5
    

  • LEI Changjian,LIN Yaping,LI Jinguo,ZHAO Jianghua
    Computer Engineering. 2016, 42(5): 1-7. https://doi.org/10.3969/j.issn.1000-3428.2016.05.001
    Because volunteer cloud nodes are highly dynamic and unreliable, the volunteer cloud is prone to Byzantine faults. A Byzantine consensus algorithm can keep the system consistent when f malicious nodes are present, but existing algorithms require a high degree of redundancy. To address this problem, this paper proposes a Byzantine fault tolerance algorithm based on the Gossip protocol, which reduces the system redundancy degree to 2f+1. It needs no master node; all computing nodes in the system are peers, which avoids the single point of failure of master-slave redundancy. Theoretical analysis and experimental results show that the proposed algorithm not only satisfies the Byzantine fault tolerance requirement but also reduces the system redundancy degree. Compared with the BFTCloud and Zyzzyva algorithms, it improves system throughput.
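
    As an illustration of the redundancy bound only (not the paper's algorithm): with 2f+1 peers, a majority vote over fully gossiped claims returns the correct value even when f nodes lie. A minimal Python sketch, with all names hypothetical:

```python
import random
from collections import Counter

def gossip_spread(claims, fanout=2):
    """Push gossip until every node has seen every (node, value) claim."""
    n = len(claims)
    known = [{claims[i]} for i in range(n)]
    while any(len(k) < n for k in known):
        for i in range(n):
            for j in random.sample(range(n), fanout):
                known[j] |= known[i]          # node i pushes everything it knows to j
    return known

f = 2
n = 2 * f + 1                                 # system redundancy degree 2f+1
claims = [(i, "commit" if i <= f else "abort") for i in range(n)]  # f nodes lie
decisions = [Counter(v for _, v in k).most_common(1)[0][0]
             for k in gossip_spread(claims)]
assert decisions == ["commit"] * n            # the f+1 honest claims win at every node
```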
  • HUANG Dongmei,SUI Hongyun,HE Qi,ZHAO Danfeng,DU Yanling,SU Cheng
    Computer Engineering. 2016, 42(5): 8-12. https://doi.org/10.3969/j.issn.1000-3428.2016.05.002
    Marine monitoring data is large in scale and strongly correlated. How to lay out the data effectively and how to improve the efficiency of data management and applications are key questions in current marine data research. With the integration of Internet-plus and the digital ocean, a cloud-based layout strategy for marine monitoring data is proposed that exploits the correlation among monitoring data. Considering the characteristics of marine monitoring data in the digital ocean, a strong correlation matrix is built from the correlations among monitoring tasks, monitoring points and monitoring data, which brings highly correlated data together in the matrix arrangement. The data is partitioned according to this correlation matrix, so that different groups can be distributed to different data centers according to their capacity. Experimental results show that the strategy reduces the running time of the algorithm and the response time of marine monitoring data access, and provides an effective method for managing and laying out marine monitoring data in the digital ocean.
  • ZHANG Jinfang,WANG Qingxin,DING Jiaman,LIU Yanjun,HUANG Xin
    Computer Engineering. 2016, 42(5): 13-17. https://doi.org/10.3969/j.issn.1000-3428.2016.05.003
    Big data applications face various challenges in data migration in cloud computing environments, mainly: reducing the number of network accesses, reducing overall time consumption and improving efficiency while balancing the global load during migration. Facing these challenges, this paper builds a problem model and describes a dynamic migration strategy, then solves for three quantities: the global time consumption of data migration, the number of network accesses and the global load balance. A cloud computing simulation is conducted on the CloudSim platform. The results show that the proposed dynamic data migration strategy reduces task completion time by 10% compared with a Zipf distribution, keeps the number of network accesses lower than Zipf and stable, and, for global load, drives the variance of node storage space close to zero.
  • WU Daini,WANG Xiaoming
    Computer Engineering. 2016, 42(5): 18-22,29. https://doi.org/10.3969/j.issn.1000-3428.2016.05.004
    Most existing public-key encryption schemes with keyword search are only suitable for the single-user setting and do not allow users to launch fuzzy retrieval. To solve this problem, an encryption scheme that meets the demand of multiple users sharing data in a cloud environment is proposed. Based on Multi-keyword Ranked Search (MRSE), the scheme uses the Lagrange function and Euclidean distance to achieve key sharing and fuzzy matching. Analysis results show that, compared with the MRSE scheme, this scheme extends to multi-user queries without reducing per-user performance, and achieves private queries, specified retrieval, multi-user queries and other functions.
  • SUN Lixin,ZHANG Xuzhi,Lü Haiyang
    Computer Engineering. 2016, 42(5): 23-29. https://doi.org/10.3969/j.issn.1000-3428.2016.05.005
    Aiming at the NP-hard optimization problem in cloud computing Virtual Machine (VM) resource allocation, a new algorithm combining Simulated Evolution and First Fit Decreasing (SME-FFD) is proposed. An optimality evaluation scheme for VM resource allocation exploits the strong hill-climbing ability of simulated evolution, which carries out the selection, evaluation and sorting of candidate allocations. The FFD rule is then applied to the sorted VMs and physical host resources to improve the efficiency and effectiveness of allocation. Experiments on the CloudSim grid laboratory and Gridbus cloud simulation platform show that the proposed algorithm achieves CPU usage above 55% and memory usage above 60%, improving host resource utilization and saving energy.
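
    The FFD half of the method is standard bin packing; a minimal sketch with the simulated-evolution scoring omitted and all names hypothetical:

```python
def ffd_place(vm_demands, host_capacity):
    """First Fit Decreasing: sort VMs by demand, place each on the first host that fits."""
    hosts = []        # remaining capacity of each opened host
    placement = {}
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for h, free in enumerate(hosts):
            if demand <= free:
                hosts[h] -= demand
                placement[vm] = h
                break
        else:                                  # no open host fits: power on a new one
            hosts.append(host_capacity - demand)
            placement[vm] = len(hosts) - 1
    return placement, hosts

placement, hosts = ffd_place({"vm1": 6, "vm2": 3, "vm3": 4, "vm4": 2}, host_capacity=8)
print(placement, hosts)   # fewer active hosts -> higher utilization per host
```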
  • WANG Zhiping,LI Xiaoyong
    Computer Engineering. 2016, 42(5): 30-34,41. https://doi.org/10.3969/j.issn.1000-3428.2016.05.006
    In cloud computing environments, technologies such as dynamic migration and scaling decrease both Service Level Agreement (SLA) violations and energy cost, but frequent Virtual Machine (VM) migration can increase SLA violations and degrade performance. To minimize SLA violations and migrations, this paper proposes a scalable real-time VM scheduling policy. It builds a scalable system with Kafka and Spark to analyze historical data, predict future load and generate a migration plan. Simulation results show that the policy decreases migrations by 50% while maintaining a low SLA violation rate compared with the native policy of CloudSim.
  • WANG Qian,XIONG Shuming
    Computer Engineering. 2016, 42(5): 35-41. https://doi.org/10.3969/j.issn.1000-3428.2016.05.007
    To meet the security requirements of data sharing in mobile cloud storage, where mobile devices have limited battery power, storage and computation capacity, a verifiable access control scheme based on Ciphertext-Policy Attribute-Based Encryption (CP-ABE) is proposed, covering both secure data access control and integrity verification. The scheme introduces an Encryption Service Provider (ESP) and a Decryption Service Provider (DSP) into the system model and securely outsources the encryption computation by using a permission attribute. The ESP generates a verifiable tag for the ciphertext; before decryption, the challenger requests data integrity verification, which the Cloud Service Provider (CSP) completes according to the verifiable tag. The DSP decrypts the ciphertext for the user who requests data access. Because each user's secret key is held only by that user, the decryption computation is outsourced securely. Analysis and evaluation show that the proposed scheme reduces the computation overhead of the mobile user by outsourcing computation to other servers.
  • PEI Xin,NIE Jun,CHEN Maozheng,LI Jian
    Computer Engineering. 2016, 42(5): 42-46,53. https://doi.org/10.3969/j.issn.1000-3428.2016.05.008
    To meet the huge data processing needs of multi-beam receivers, focal plane arrays and antenna arrays, a widely applicable astronomical correlator is designed using an FPGA+CPU+GPU heterogeneous architecture centered on GPU parallel computing. A strictly timed FPGA performs sampling and pre-processing, a parallel GPU performs signal processing, and a CPU handles logic control, data storage and display; the latter two are developed on the CUDA platform. Test results show that the correlator is stable and reliable, and that it can realize high-precision measurement in any band within the passband by adjusting the mixing, filtering and Fourier transform parameters.
  • QUAN Hengxing,WEI Xuecai,WANG Man
    Computer Engineering. 2016, 42(5): 47-53. https://doi.org/10.3969/j.issn.1000-3428.2016.05.009
    Traditional distributed file systems do not consider the state of the underlying network, so their file read, write and repair performance in heterogeneous networks leaves much room for improvement. Software Defined Network (SDN), a next-generation network architecture, decouples the packet control and forwarding layers and makes network resources virtualized and programmable, so it can be applied to data center networks. Using dynamic information about the underlying network, it computes optimized data stream paths and steers DFS data flows, enhancing the performance of the distributed file system. Applying SDN to DFS design, this paper proposes a prototype distributed file system based on SDN and tests the three basic operations: read, write and repair. Results show that, compared with a traditional network, the read, write and repair performance of the DFS over SDN improves markedly, especially for large data flows and heterogeneous networks.
  • QI Xiangming,ZHENG Shuai,WEI Ping
    Computer Engineering. 2016, 42(5): 54-59. https://doi.org/10.3969/j.issn.1000-3428.2016.05.010
    Because of the huge amount of data in gene information extraction, whose real-time requirements cannot be met by traditional single-threaded methods, the Hadoop framework is used to design a two-stage parallel computing model. The first stage extracts a candidate gene subset, and the second stage extracts K-nearest-neighbor genetic information in parallel, covering the whole process with parallel computing. Meanwhile, to further reduce the computational complexity of the algorithm, microarray data sampling is used to reduce the amount of data processed and eliminate data redundancy. Experimental results show that the proposed algorithm runs efficiently, inherits the scalability of the Hadoop programming model, and is highly portable.
  • LI Zhanghai,PAN Jiuhui
    Computer Engineering. 2016, 42(5): 60-65. https://doi.org/10.3969/j.issn.1000-3428.2016.05.011
    Candidate key computation based on functional dependencies has important uses in many areas, such as snapshot differential algorithms based on compression, data consistency checking, data inconsistency repair and data integrity constraints. This paper discusses and analyzes how functional dependencies are preserved under several basic operations, such as selection, difference, union and Cartesian product, and presents a conclusion on the relationship between the candidate keys of a derived relation and those of the original base relations. It proposes an algorithm to reduce the set of candidate attributes for projection, generalized projection, Cartesian product and aggregation operations, whose candidate keys may contain redundant attributes, and an algorithm that finds the candidate keys of a given derived relation. Experiments applying the algorithms to snapshot differential computation show that they improve the efficiency of incremental calculation.
  • HE Ying,XU Weihong,LI Yanglin
    Computer Engineering. 2016, 42(5): 66-70,79. https://doi.org/10.3969/j.issn.1000-3428.2016.05.012
    To address the slow convergence, tendency to fall into local optima and low solution accuracy of the Particle Swarm Optimization (PSO) algorithm when applied to automatic software Test Case (TC) generation, this paper presents an automatic TC generation method based on a Reduced Adaptive Chaos Particle Swarm Optimization (RACPSO) algorithm. The standard PSO evolution equations are simplified into a velocity-free evolution equation, and an adaptive inertia weight based on fitness value is proposed to update particle positions directly. Meanwhile, a premature-convergence judgment strategy based on the fitness variance of the swarm measures the degree of convergence, and RACPSO increases particle diversity by applying a chaos search mechanism that guides the swarm out of premature convergence as quickly as possible. Experimental results show that RACPSO converges faster and solves more efficiently than the Standard Particle Swarm Optimization (SPSO) and Adaptive Particle Swarm Optimization (APSO) algorithms.
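
    A minimal sketch of a velocity-free PSO update with a fitness-adaptive inertia weight, in the spirit the abstract describes; the constants and the two-level weight rule are assumptions, and the chaos search stage is omitted:

```python
import random

def velocity_free_pso(fitness, dim, n_particles=20, iters=100, bounds=(-5.0, 5.0)):
    """Positions move directly toward personal/global bests, scaled by an
    inertia weight adapted to each particle's fitness (assumed scheme)."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pfit = [fitness(x) for x in X]
    gbest = min(range(n_particles), key=lambda i: pfit[i])
    for _ in range(iters):
        favg = sum(pfit) / n_particles
        for i, x in enumerate(X):
            # good particles (below average fitness) move cautiously, bad ones explore
            w = 0.4 if pfit[i] <= favg else 0.9
            for d in range(dim):
                c1, c2 = random.random(), random.random()
                x[d] = (w * x[d]
                        + 1.5 * c1 * (pbest[i][d] - x[d])
                        + 1.5 * c2 * (pbest[gbest][d] - x[d]))
                x[d] = max(lo, min(hi, x[d]))
            f = fitness(x)
            if f < pfit[i]:
                pbest[i], pfit[i] = x[:], f
                if f < pfit[gbest]:
                    gbest = i
    return pbest[gbest], pfit[gbest]

best, val = velocity_free_pso(lambda x: sum(v * v for v in x), dim=3)
print(best, val)
```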
  • PENG Yi,AN Hong,JIN Xu,CHENG Yichao,CHI Mengxian,SUN Sun
    Computer Engineering. 2016, 42(5): 71-79. https://doi.org/10.3969/j.issn.1000-3428.2016.05.013
    Aiming at the poor scalability and low precision of the DART simulator based on Field Programmable Gate Array (FPGA), this paper proposes a hardware-friendly distributed simulation mechanism. The mechanism uses implicit synchronization and replaces the centralized controller with intra-node counters and inter-node buffer queues, so that timing synchronization and counting are handled by each node, which improves simulation speed. Based on this mechanism, a Network on Chip (NoC) simulator is designed and implemented. Experimental results demonstrate that the simulator achieves accuracy similar to the widely used BookSim software simulator with a 200-fold speedup. Compared with the DART simulator, it accelerates simulation by up to 21% and scales better.
  • XU Yanyan,LEI Yingchun,GONG Yili
    Computer Engineering. 2016, 42(5): 80-84,101. https://doi.org/10.3969/j.issn.1000-3428.2016.05.014
    Inheriting from traditional disk file systems, most Distributed File Systems (DFS) organize and manage files as objects or chunks of fixed size, which is expensive and performs poorly for random writes or inserts, yet approximately 25% of file operations from a typical user are random writes. To change this, this paper puts forward a content-based variable-size file chunking method that uses the Rabin fingerprint algorithm to chunk files and identify chunks by their contents. A new write interface, semantically compatible with POSIX, is proposed to specify the write type precisely and improve write performance. A novel DFS named VarFS implements this design by modifying Ceph. Experimental results show that VarFS reduces the data that update operations must read and write back, alleviating data transfer, and accordingly achieves 1 to 2 orders of magnitude lower latency and bandwidth consumption than Ceph on random writes.
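
    A minimal sketch of content-defined chunking in the Rabin-fingerprint spirit, using a simple polynomial rolling hash rather than a true irreducible-polynomial Rabin fingerprint; all parameters are assumptions. Because boundaries depend only on local content, an insertion shifts boundaries locally instead of re-aligning every fixed-size block:

```python
def chunks(data, window=16, mask=0x3FF, base=257, min_len=256, max_len=4096):
    """Cut a chunk wherever the rolling hash of the last `window` bytes
    has its low bits equal to zero (expected chunk size ~ mask+1 bytes)."""
    power = pow(base, window - 1, 1 << 32)
    out, start, h = [], 0, 0
    for i, b in enumerate(data):
        if i - start >= window:                       # slide: drop the oldest byte
            h = (h - data[i - window] * power) & 0xFFFFFFFF
        h = (h * base + b) & 0xFFFFFFFF
        length = i - start + 1
        if (length >= min_len and (h & mask) == 0) or length >= max_len:
            out.append(data[start:i + 1])             # boundary chosen by content
            start, h = i + 1, 0
    if start < len(data):
        out.append(data[start:])
    return out

data = bytes(range(251)) * 60
print([len(c) for c in chunks(data)])
```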
  • ZUO Yao,LIANG Ying,XU Hongbo,HUANG Shuo
    Computer Engineering. 2016, 42(5): 85-92,107. https://doi.org/10.3969/j.issn.1000-3428.2016.05.015
    Many large-scale, highly connected graphs exist in the real world, and caching is an efficient way to increase the efficiency of visiting and querying graph data. This paper proposes a preloaded caching strategy for large-scale graph data that includes two methods, 'log-based' and 'big-degree-node-first', which exploit the access locality of graphs to cache frequently accessed data. The paper designs a distributed cache framework for graph data in the GolaxyGDB graph storage system and describes how the caching strategy is implemented in it. Experimental results demonstrate that the proposed strategy effectively improves the hit ratio of complex graph queries and reduces response time, meeting the demand for online access in practical applications.
  • LIU Wei,ZHAO Yu,CHEN Rui
    Computer Engineering. 2016, 42(5): 93-101. https://doi.org/10.3969/j.issn.1000-3428.2016.05.016
    This paper proposes a resource optimization model for Multi-Radio Multi-Channel (MR-MC) wireless networks to solve channel conflict and interference problems. The model uses 0-1 linear programming to minimize network interference. Interference information between links is collected by a conflict search tree within each cluster, and a channel assignment algorithm then eliminates interference and optimizes resources across the whole network. The optimization model is applied to a wireless ad hoc network, and experimental results show that it performs much better than the existing Cluster-based Channel Assignment Strategy (CCAS): it improves throughput by 89.5%, decreases channel queue length and reduces conflicts between nodes.
  • WAN Xi,CHEN Zhao,CHEN Bin,MAO Minghui
    Computer Engineering. 2016, 42(5): 102-107. https://doi.org/10.3969/j.issn.1000-3428.2016.05.017
    This paper comparatively analyzes methods of generating spatiotemporal chaotic sequences and their performance advantages as spreading codes. To weaken the correlation between lattice sites in the One-way Coupled Map Lattice (OCML) model, a method of random lattice-point intervals is proposed: the spatial lattice is divided into several segments, and the lattice coordinate for the next time step is always selected randomly within each segment, producing several chaotic sequences with a good parallel structure. With this method, two short codes and one long code for a Code Division Multiple Access (CDMA) system are generated and simulated in Simulink to verify feasibility. The results show that the OCML model, with its high complexity and multi-dimensional characteristics, can produce pseudo-random sequences that perform well as spread-spectrum codes in a CDMA system.
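
    A minimal sketch (assumed parameters, ring coupling) of a one-way coupled map lattice driven by the logistic map, with one site's orbit binarized into a candidate spreading sequence; the paper's random lattice-point interval selection is not reproduced here:

```python
def ocml(n_sites=8, steps=1000, eps=0.95, mu=4.0, seed=0.37):
    """Each site is driven one-way by the chaotic logistic map of its left neighbor."""
    f = lambda x: mu * x * (1.0 - x)                 # fully chaotic logistic map
    x = [(seed + 0.01 * j) % 1.0 for j in range(n_sites)]
    history = []
    for _ in range(steps):
        # x[j-1] wraps around at j = 0, giving one-way coupling on a ring
        x = [(1 - eps) * f(x[j]) + eps * f(x[j - 1]) for j in range(n_sites)]
        history.append(x)
    return history

# Binarize one site's orbit into a candidate spreading sequence (threshold at 0.5).
orbit = [row[3] for row in ocml()]
bits = [1 if v > 0.5 else 0 for v in orbit]
print(bits[:32])
```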
  • ZHENG Haotian,JI Xinsheng,HUANG Kaizhi
    Computer Engineering. 2016, 42(5): 108-112. https://doi.org/10.3969/j.issn.1000-3428.2016.05.018
    In Visible Light Communication (VLC) networks, existing handover decision algorithms for the blockage case are too simple to keep the transfer delay caused by blockage small at all times. To solve this problem, this paper proposes a vertical handover decision algorithm based on a Radial Basis Function (RBF) fuzzy neural network. The parameters that affect handover decisions in the blockage case are studied, and methods for acquiring them are analyzed. The three most important parameters are fed into the RBF fuzzy neural network for fuzzy reasoning, and the handover decision is made according to the exact output value. Simulation results show that the algorithm makes reasonable handover decisions, keeps the transfer delay small under different conditions, and reduces handover times by about 50% compared with immediate handover in the case of frequent blockage.
  • GAO Yang,HUANG Guoyong,WU Jiande
    Computer Engineering. 2016, 42(5): 113-117. https://doi.org/10.3969/j.issn.1000-3428.2016.05.019
    A method for detecting and repairing single-frequency cycle slips based on improved Local Mean Decomposition (LMD) is proposed to solve the problem that small cycle slips are difficult to detect and repair in high-accuracy BeiDou Navigation Satellite System (BDS) positioning. The method constructs a cycle-slip detection quantity from pseudorange and carrier-phase observations and decomposes it by LMD into a number of Product Function (PF) components; the epoch at which a cycle slip appears is located accurately from the maximum of the instantaneous amplitude function. A forecasting model for the PF component time series before the cycle slip is established using a Least Squares Support Vector Machine (LS-SVM), and the cycle slip is repaired by comparing measured values with predicted ones. Verification on real observations shows that, unlike wavelet analysis, the method does not face the difficulty of selecting a wavelet function, and it detects and repairs single-frequency cycle slips accurately.
  • WANG Shujuan,YIN Jiao
    Computer Engineering. 2016, 42(5): 118-122,129. https://doi.org/10.3969/j.issn.1000-3428.2016.05.020

    In Vehicular Ad-hoc Networks (VANETs), the unstable network environment prevents the Road Side Unit (RSU) from transmitting data to vehicles quickly, efficiently and reliably. This paper proposes an RSU multicast retransmission algorithm based on network coding. After the first stage of RSU multicast, vehicular communication characteristics such as location, speed, direction and data validity are considered jointly, and the optimal network coding combination is found efficiently. Multicasting this network-coded packet enhances data dissemination efficiency. Experimental results show that, compared with non-coding and random linear network coding retransmission algorithms, the proposed algorithm achieves large performance improvements in VANET data dissemination: the average data download rate is increased by a factor of 4, the average download latency is reduced by 90%, and the data distribution delay is reduced by 40% and 20% respectively.
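
    The benefit of coded retransmission can be seen in the smallest case: two receivers with complementary losses are both repaired by one XOR packet instead of two separate retransmissions. A self-contained sketch (packet contents are made up):

```python
def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

p1, p2 = b"PKT-ONE!", b"PKT-TWO!"
coded = xor(p1, p2)                  # the single retransmitted (coded) packet

recovered_by_A = xor(coded, p1)      # vehicle A holds p1, lost p2: recovers p2
recovered_by_B = xor(coded, p2)      # vehicle B holds p2, lost p1: recovers p1
assert recovered_by_A == p2 and recovered_by_B == p1
```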

  • LIU Tao,ZHOU Xianchun
    Computer Engineering. 2016, 42(5): 123-129. https://doi.org/10.3969/j.issn.1000-3428.2016.05.021
    Aiming at network damage caused by energy, fault and environmental factors in Wireless Sensor Networks (WSN), an anti-attack mechanism is designed for sensor nodes. The operating modes of a sensor node are defined as working, shut down and monitoring the channel. Combined with the connectivity and extensibility of a clustered WSN, a Markov chain model is established to analyze the dynamic topology with respect to the network-damaging factors: energy consumption, the number of available end-to-end channels, the average delay of data transmission, cluster-head node failure and intra-cluster node failure. To guarantee service quality and enhance real-time network survivability, a topology reconfiguration and invulnerability evolution mechanism is established based on opportunistic cluster node selection and network topology reconfiguration. Experimental results show that this mechanism is superior to traditional non-dynamic invulnerability mechanisms, with higher resource utilization, lower node failure probability and a more robust network topology.
  • CHEN Chixin,ZHOU Jipeng
    Computer Engineering. 2016, 42(5): 130-133. https://doi.org/10.3969/j.issn.1000-3428.2016.05.022
    In wireless networks, packet scheduling and congestion control are usually designed independently, which prevents network resources from being used efficiently. To solve this problem, a new maximum-weight scheduling algorithm based on congestion control is proposed. The algorithm computes the weights of all flows at a node, chooses the flow with the maximum weight to schedule, and adjusts the flow's transmission rate according to the network congestion state. Simulation results show that the algorithm improves network throughput, achieves better fairness and reduces the packet loss rate.
  • QUE Jianhua
    Computer Engineering. 2016, 42(5): 134-138,145. https://doi.org/10.3969/j.issn.1000-3428.2016.05.023
    Existing community detection algorithms have high complexity and low speed. Aiming at this problem, an adaptive community structure detection algorithm for social networks is proposed that maximizes modularity. The algorithm exploits the power-law degree distribution, scales to very large networks, and possesses approximation factors that guarantee the quality of the detected community structure. To validate the proposed algorithm, extensive experiments are conducted on both synthesized networks with known community structure and real-world traces. Experimental results show that its detection performance is better than that of other adaptive methods such as the FacetNet and Blondel algorithms.
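
    For reference, the modularity Q that such algorithms maximize, computed directly from its definition Q = (1/2m) Σ_ij [A_ij − k_i k_j / 2m] δ(c_i, c_j); a brute-force sketch suitable only for small graphs:

```python
def modularity(edges, community):
    """Brute-force modularity of an undirected graph under a node->community map."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    q = 0.0
    for u in deg:
        for v in deg:
            if community[u] != community[v]:
                continue
            a = sum(1 for e in edges if e in ((u, v), (v, u)))  # adjacency A_uv
            q += a - deg[u] * deg[v] / (2.0 * m)
    return q / (2.0 * m)

# two triangles joined by a single bridge edge
edges = [("a", "b"), ("b", "c"), ("a", "c"),
         ("d", "e"), ("e", "f"), ("d", "f"), ("c", "d")]
print(modularity(edges, {"a": 0, "b": 0, "c": 0, "d": 1, "e": 1, "f": 1}))  # ~0.357
```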
  • DENG Pengfei,WANG Dan,JIA Xiangdong
    Computer Engineering. 2016, 42(5): 139-145. https://doi.org/10.3969/j.issn.1000-3428.2016.05.024
    To enhance the spectrum and energy efficiency of mobile communication systems, a Full-Duplex (FD) massive Multiple-Input Multiple-Output (MIMO) relaying system employing the Amplify-and-Forward (AF) relaying protocol is proposed. For this massive MIMO AF relaying scheme, the paper derives a closed-form expression for the lower bound of the achievable ergodic rate of each user pair, from which the total spectrum and energy efficiency is obtained. To gain more insight, for example into loop interference suppression, the asymptotic performance is analyzed under different power-scaling schemes. Results show that small-scale fading and inter-pair interference are canceled effectively as the number of receiving or transmitting antennas grows to infinity, so that the spectrum and energy efficiency saturates to a constant. Moreover, the loop interference introduced by the FD mode is completely eliminated when the transmission power at the source is fixed and that at the relay is scaled by the reciprocal of the number of transmitting antennas at the relay.
  • XIONG Shixun,FAN Tongrang
    Computer Engineering. 2016, 42(5): 146-150. https://doi.org/10.3969/j.issn.1000-3428.2016.05.025
    Aiming at the problem that recommended nodes lack trust reliability and change dynamically during information interaction in community recommendation systems, this paper builds on recommendation algorithms for central and trusted nodes in community detection and puts forward a trusted central-node recommendation method that evolves over time. The method uses community division to extract network central nodes, establishes a trust mechanism for nodes, and uses node trust to control the spread of bad information. A feedback mechanism updates the trusted nodes and improves the safety of information dissemination, yielding a central-node selection strategy with dynamic trust-value feedback. Experimental results show that, compared with the traditional trusted-edge community division strategy, this strategy suppresses more rumor spreading and improves the reliability of information flow transmission.
  • LIN Yi,LIAO Qinzhi
    Computer Engineering. 2016, 42(5): 151-155,162. https://doi.org/10.3969/j.issn.1000-3428.2016.05.026
    Aiming at the illegal tampering of Digital Imaging and Communications in Medicine (DICOM) file header information during transfer over public networks, a DICOM file header tamper detection method is proposed. The method treats the file header information as image pixels and generates a watermark of the header with a hash function constructed from the Message-Digest Algorithm 5 (MD5). The watermark is embedded into the DICOM image in a reversible, invisible way, and the difference between the extracted watermark and the regenerated watermark is used to detect tampering with the DICOM file header. Experimental results indicate that the method is highly sensitive to header tampering: even a 1-bit change can be detected, the certification process is simple, and the accuracy is high.
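
    A minimal sketch of the detection idea only (the reversible embedding into pixel data is omitted): hash the header fields with MD5 so that any 1-bit change breaks the match. The field names are hypothetical:

```python
import hashlib

def header_watermark(header: dict) -> bytes:
    """Serialize tag/value pairs in a fixed order and digest them with MD5."""
    blob = "|".join(f"{tag}={value}" for tag, value in sorted(header.items()))
    return hashlib.md5(blob.encode("utf-8")).digest()

header = {"PatientName": "DOE^JOHN", "StudyDate": "20160515", "Modality": "CT"}
embedded = header_watermark(header)          # embedded into the image before sending

header["StudyDate"] = "20160516"             # 1-character tamper in transit
assert header_watermark(header) != embedded  # regenerated watermark no longer matches
```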
  • WANG Mao,WANG Xiaoming
    Computer Engineering. 2016, 42(5): 156-162. https://doi.org/10.3969/j.issn.1000-3428.2016.05.027
    Analysis of a robust smart card authentication scheme for multi-server architectures reveals that the scheme does not fully achieve anonymous user authentication and is vulnerable to Denial of Service (DoS) attacks, smart card breach attacks and other security issues. To solve these problems, this paper proposes an improved scheme. Using random masking, the key stored in each user's smart card differs, and the user name changes randomly at every login; BAN logic analysis demonstrates the effectiveness of the improved scheme. Analysis results show that it resists smart card breach and DoS attacks, achieves fully anonymous authentication for users, and reduces server-side computation.
  • DU Ruiying,LI Hui,FAN Dongdong
    Computer Engineering. 2016, 42(5): 163-167,172. https://doi.org/10.3969/j.issn.1000-3428.2016.05.028
    Aiming at privacy leakage in Point of Interest (POI) query services, this paper proposes a new location privacy protection scheme. Using group signatures, the scheme not only protects the user's identity privacy but also gives the Service Provider (SP) an efficient way to verify the user's identity. With the aid of a Partially Trusted Query Agent (PTQA), it also solves the leakage of the network address, which could otherwise identify the actual user. It reduces the cost of constructing the anonymous area by fixing the points used in queries. Experimental results show that it places less stress on mobile terminals and has smaller communication overhead than the Dummy Location Selection (DLS) scheme.
  • HU Baoan,LI Bing,LI Yaling
    Computer Engineering. 2016, 42(5): 168-172. https://doi.org/10.3969/j.issn.1000-3428.2016.05.029
    In reality, network viruses are latent, and the probabilities of removing nodes in different states from the network are not equal. Based on transmission dynamics, a network SIR virus model with time delay, direct immunity and different removal rates is established. Using the stability theory of delay differential equations, this paper analyzes the dynamic behavior of the model and provides a theoretical basis for effectively controlling and eliminating the spread of computer viruses in the network. Analysis of the effect of the time delay on the model's solution illustrates that improving virus detection technology is necessary and that the proposed model can effectively isolate infected nodes while controlling virus transmission in the network.
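
    The abstract does not give the paper's exact equations; one common form of a delayed SIR model with a direct immunization rate p, detection delay tau and state-dependent removal rates is:

```latex
% Assumed form of a delayed SIR virus model: b is the node birth rate, beta the
% infection rate, p the direct immunization rate, gamma the cure/removal rate of
% infected nodes acting after a detection delay tau, and mu_S, mu_I, mu_R the
% state-dependent removal rates.
\begin{aligned}
\frac{\mathrm{d}S(t)}{\mathrm{d}t} &= b - \beta S(t)I(t) - p\,S(t) - \mu_S S(t),\\
\frac{\mathrm{d}I(t)}{\mathrm{d}t} &= \beta S(t)I(t) - \gamma I(t-\tau) - \mu_I I(t),\\
\frac{\mathrm{d}R(t)}{\mathrm{d}t} &= p\,S(t) + \gamma I(t-\tau) - \mu_R R(t)
\end{aligned}
```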
  • ZHANG Fengbin,GE Haiyang,YANG Ze
    Computer Engineering. 2016, 42(5): 173-178,185. https://doi.org/10.3969/j.issn.1000-3428.2016.05.030
    To deal with the slow data processing and poor timeliness of immune intrusion detection, non-negative matrix factorization by Bregman iteration is proposed. It improves the traditional method by changing the matrix iteration process and using matrix position information to realize the decomposition conditions and constraints, which better retains the internal structure of the data and accelerates processing. Experimental results on the KDD CUP 1999 dataset show that the approach improves the speed of intrusion detection and enhances the timeliness of immune intrusion detection.
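
    For orientation, classic multiplicative-update NMF minimizing the Frobenius objective; the paper's Bregman-iteration variant modifies the iteration itself and is not reproduced here:

```python
import numpy as np

def nmf(V, rank, iters=200, eps=1e-9):
    """Lee-Seung multiplicative updates for V ~= W @ H with W, H >= 0."""
    n, m = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # eps avoids division by zero
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.abs(np.random.default_rng(1).random((20, 10)))
W, H = nmf(V, rank=4)
print(np.linalg.norm(V - W @ H))               # reconstruction error after fitting
```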
  • LU Zengxin,QU Dapeng,FAN Tiesheng
    Computer Engineering. 2016, 42(5): 179-185. https://doi.org/10.3969/j.issn.1000-3428.2016.05.031

    Existing image scrambling evaluation methods are usually closely tied to pixel locations, are vulnerable to intentional and unintentional attacks such as shearing and rotation, and have high error. To solve this problem, an image scrambling degree evaluation method based on the lifting wavelet, Manhattan distance and texture is proposed. The images before and after scrambling are transformed by the lifting wavelet; the statistical distance between corresponding wavelet coefficients is calculated; a Gray Level Co-occurrence Matrix (GLCM) is generated from the high-frequency subbands; texture features are extracted; and the scrambling degree is obtained from the distance and texture measures. Experimental results show that, compared with existing location-based scrambling measures, this method has a wider application range, agrees better with subjective evaluation, relies less on the original image, and evaluates the scrambling degree more effectively.

  • LU Liangfeng,XIE Zhijun,YE Hongwu
    Computer Engineering. 2016, 42(5): 186-193. https://doi.org/10.3969/j.issn.1000-3428.2016.05.032
    Combining RGB and depth images can effectively improve RGB-D image recognition accuracy. However, prior work only concatenates RGB and depth features linearly, neither extracting and fusing them according to their differences nor taking full advantage of RGB-D images. This paper proposes a multi-modal sparse auto-encoder algorithm that extracts and fuses RGB and depth features simultaneously. By combining the multi-modal sparse auto-encoder with spatial pyramid max pooling, it builds a new deep learning model that extracts discriminative features and performs RGB-D object recognition. Two standard RGB-D databases are used to verify the proposed algorithm and model. Experimental results show that, compared with previous RGB-D object recognition algorithms, the proposed algorithm fuses RGB and depth features effectively and achieves higher recognition accuracy.
  • MO Yuanyuan,PAN Litong,YAN Xin,YU Zhengtao,LIU Xiaohui
    Computer Engineering. 2016, 42(5): 194-200. https://doi.org/10.3969/j.issn.1000-3428.2016.05.033
    Because parallel web pages are heterogeneous and complex, how to obtain bilingual parallel pages automatically and effectively, and how to improve their quality, are key issues in constructing a parallel corpus. Taking the mining of Khmer-English parallel pages as an example, a maximum entropy model is used to improve parallel page extraction, treating the recognition of parallel pages as the classification of candidate pages. Candidate pages are found through cosine similarity or database queries, and the maximum entropy model is trained with features based on page content and the cosine similarity among candidate pages; parallel pages are then recognized by the classifier. For feature selection, besides structural, vocabulary and HTML tag features, document vector similarity computed by the TF-IDF algorithm and cosine similarity is used. Experimental results show that the method achieves a recall of 98% and a precision of 94% when collecting parallel pages.
  • ZHANG Hao,CHEN Lifei,GUO Gongde
    Computer Engineering. 2016, 42(5): 201-206,212. https://doi.org/10.3969/j.issn.1000-3428.2016.05.034
    Existing sequence similarity measures only consider the local similarity of subsequences and ignore global structure information. Therefore, a sequence similarity measure based on the entropy of individual symbols is proposed, where the entropy of a symbol is computed from the positions and counts of all occurrences of that symbol in the sequence. The validity of the new measure is verified through agglomerative hierarchical clustering; experimental results on multiple datasets show that, compared with existing methods based on local substring similarity, the new measure improves clustering accuracy significantly.
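
    One plausible reading of a per-symbol positional entropy (an assumption; the abstract does not fix the formula): the Shannon entropy of the normalized gaps between a symbol's successive occurrences, compared across sequences:

```python
import math
from collections import defaultdict

def symbol_entropies(seq):
    """For each symbol, the Shannon entropy of the normalized gaps between its
    successive occurrences, capturing global positional structure."""
    positions = defaultdict(list)
    for i, s in enumerate(seq):
        positions[s].append(i)
    ent = {}
    for s, pos in positions.items():
        gaps = [b - a for a, b in zip(pos, pos[1:])] or [len(seq)]
        total = sum(gaps)
        ent[s] = -sum((g / total) * math.log2(g / total) for g in gaps)
    return ent

def sequence_distance(x, y):
    """Compare two sequences via the entropies of their shared alphabet."""
    ex, ey = symbol_entropies(x), symbol_entropies(y)
    return sum(abs(ex.get(s, 0.0) - ey.get(s, 0.0)) for s in set(ex) | set(ey))

print(sequence_distance("abcabcabc", "aabbccabc"))
```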
  • JING Weipeng,ZHANG Xingge
    Computer Engineering. 2016, 42(5): 207-212. https://doi.org/10.3969/j.issn.1000-3428.2016.05.035
    Aiming at the low efficiency of extracting key speech features in the pooling layer of current Convolutional Neural Network (CNN) models, a Dynamic Adaptive Pooling (DA-Pooling) algorithm on the POWER8 architecture is proposed. The algorithm is implemented in a CNN model on the deep learning tool Caffe as follows: filter bank features produced by the convolution operation are taken as input, locally adjacent acoustic feature data is extracted, the Spearman correlation coefficient of the extracted data is calculated to determine its correlation, and an appropriate pooling operation is chosen by weight according to the correlation of the data. DA-Pooling runs on the POWER8 high-performance platform, whose efficient floating-point units and multi-thread parallelism improve the efficiency of processing massive data. Experimental results show that DA-Pooling improves the recognition accuracy of key speech data compared with popular pooling algorithms, thereby improving the stability of speech signal recognition in the whole CNN.
  • LI Hui,SHI Zhao,YI Junkai
    Computer Engineering. 2016, 42(5): 213-217,223. https://doi.org/10.3969/j.issn.1000-3428.2016.05.036
    The accuracy of final recommendation results is often hurt by the sparse evaluation information that users provide on webpage texts. Therefore, a secondary clustering recommendation algorithm based on information entropy is proposed. By extracting feature words and calculating their weights, the information entropy of each webpage text browsed by a user is computed and the nearest entropy difference obtained; the threshold of the nearest entropy difference is determined using a uniformly distributed continuous random variable. With an average-entropy approximation, the initial cluster number and centers of the secondary clustering are determined, and the number of recommendation results is obtained by logarithmic function fitting. The recommended contents are determined by two rounds of text clustering, combining Euclidean distance with information entropy. Experimental results show that the algorithm runs stably in a real system and improves the accuracy of final recommendations compared with secondary clustering recommendation without information entropy.
  • BI Jiajia,ZHANG Jing
    Computer Engineering. 2016, 42(5): 218-223. https://doi.org/10.3969/j.issn.1000-3428.2016.05.037
    Aiming at the problem that background tables contribute differently to classification tasks in a relational database, this paper proposes a multi-relational naive Bayesian classification algorithm based on relation selection. It performs two rounds of pruning on the multi-relational tables: it first removes relations that contribute little to classification according to their maximum information gain ratio, then defines the average information gain ratio as the measure of a table's contribution and selects the final relations for classification from the remaining relations accordingly. Experimental results show that the algorithm improves classification accuracy effectively: compared with the Graph-NB, Classify_tables and MNBC-W algorithms, the average accuracy is improved by 2.2%, 1.1% and 0.86% respectively.
  • QI Le,ZHANG Xiaogang,YAO Hang
    Computer Engineering. 2016, 42(5): 224-229. https://doi.org/10.3969/j.issn.1000-3428.2016.05.038
    Outdoor images and videos often suffer from blurring and color deviation due to atmospheric haze, which harms the stability of outdoor video systems. Because existing dehazing algorithms are computationally expensive, real-time video dehazing is difficult to achieve in software alone. This paper analyzes the computational bottleneck of the dark channel prior dehazing algorithm and uses a High Level Synthesis (HLS) tool to implement the algorithm in hardware, running it on a Field Programmable Gate Array (FPGA) with pipelining. Experimental results show that, for 1080p real-time scenes and while preserving dehazing quality, the processing speed exceeds 45 frames per second, fully meeting the needs of high-definition video processing.
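
    The computational bottleneck that such designs move into FPGA pipelines is the patch-wise minimum of the dark channel prior; a direct (deliberately naive) software sketch, with the patch size and omega constant as commonly assumed values:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel: per-pixel minimum over RGB, then a min filter over a local patch."""
    min_rgb = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    h, w = min_rgb.shape
    out = np.empty_like(min_rgb)
    for i in range(h):                # this nested min-filter loop is the hot spot
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def transmission(img, airlight, omega=0.95, patch=15):
    """Estimated transmission t = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(img / airlight, patch)

img = np.random.default_rng(0).random((64, 64, 3))   # stand-in for a hazy frame
t = transmission(img, np.array([0.9, 0.9, 0.9]))
print(t.min(), t.max())
```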
  • SHEN Songyan,CHEN Ying
    Computer Engineering. 2016, 42(5): 230-234. https://doi.org/10.3969/j.issn.1000-3428.2016.05.039
    To solve tracking failures caused by rapid movement and severe deformation, a contour tracking algorithm is proposed. Training samples are generated by cyclic shifts of the current tracking area via a circulant matrix and used for kernelized correlation regression training. According to the regression model of the previous frame, a correlation map between the target and the test area is computed in the frequency domain and transformed back to the spatial domain to form a target position map. This map is fused with the gray image of the test frame to build a contour confidence map, and the target contour is extracted with an active contour model using the confidence map as auxiliary information. When contour distortion is detected by a designed distortion evaluation scheme, the contour is corrected in the next frame. The tracking result is then fed back to the kernelized correlation filter to update the tracking template. Experimental results show that, in various tracking scenarios, the proposed method locates objects and tracks contours more accurately than other state-of-the-art methods, with better robustness.
  • ZHUANG Yulin
    Computer Engineering. 2016, 42(5): 235-238. https://doi.org/10.3969/j.issn.1000-3428.2016.05.040
    Remote sensing images usually have low contrast. To deal with this problem, a remote sensing image contrast enhancement algorithm based on histogram optimization is proposed. Histogram Equalization (HE) and Contrast Limited Adaptive Histogram Equalization (CLAHE) are applied to the input image to obtain two histograms with global and local characteristics respectively. After choosing a regularization parameter, an optimized histogram is calculated by optimizing an objective function, and the final result is obtained by applying traditional histogram specification with the optimized histogram. Experimental results show that, compared with classic contrast enhancement algorithms, the proposed algorithm better enhances the contrast and content discernibility of remote sensing images, with clear improvements in subjective visual effect and detail.
  • LI Zhiming
    Computer Engineering. 2016, 42(5): 239-243,248. https://doi.org/10.3969/j.issn.1000-3428.2016.05.041
    For feature extraction in iris liveness detection, this paper proposes an iris liveness detection algorithm based on a deep Convolutional Neural Network (CNN). Iris images are preprocessed in three ways: normalization, block normalization and direct cropping of the iris region; the results are fed into the CNN for feature extraction, and genuine and fake irises are identified with a trained classifier. Experimental results show that the algorithm automatically learns the hidden characteristics of iris images, makes genuine and fake iris features more discriminative, and achieves above 96.72% accuracy on the ND-Contact and CASIA-Iris-Fake databases.
  • XU Di,LIU Yang,WANG Xiujin
    Computer Engineering. 2016, 42(5): 244-248. https://doi.org/10.3969/j.issn.1000-3428.2016.05.042
    When edge detection or image segmentation is applied directly to the extraction of mural strokes, it usually produces two responses per stroke, and the extracted strokes are generally a single pixel wide, so strokes with accurate locations and the original mural style cannot be obtained. To solve this problem, a method combining grayscale information with edge information is proposed. Gaussian-blur-based high-frequency emphasis filtering is used to simplify the histogram of the image background. Strokes extracted separately by threshold segmentation and edge detection are integrated to obtain the complete line drawing, and vector quantization is used to smooth the stroke edges. Experimental results indicate that, when the gray value of strokes is lower than that of the surrounding background, the proposed method locates strokes accurately compared with the Canny detector, the gPb detector and other edge detection methods, and the extracted strokes have a certain width that reflects the original mural style.
  • CHEN Yong,WU Xiaomin,YANG Jian,XI Hongsheng
    Computer Engineering. 2016, 42(5): 249-252,257. https://doi.org/10.3969/j.issn.1000-3428.2016.05.043
    Aiming at the high complexity and large amount of computation of H.264, this paper designs and implements a CPU+GPU heterogeneous parallel H.264 decoder based on the Compute Unified Device Architecture (CUDA), which makes full use of the GPU's parallel computing ability and the CPU's logic control advantages to improve running speed and decoding performance. Experimental results show that, compared with the traditional serial decoder in FFmpeg, overall performance improves 2 to 7 times with GPU acceleration, and each parallel module accelerates 5 to 11 times.
  • SHEN Kefan,WANG Zhongyuan
    Computer Engineering. 2016, 42(5): 253-257. https://doi.org/10.3969/j.issn.1000-3428.2016.05.044
    Through research on transcoding with enhancement for low-light videos, a fast mode decision algorithm is proposed. To address the poor visibility of low-light videos, the Multi-Scale Retinex with Color Restoration (MSRCR) algorithm is used to enhance video quality. The proposed algorithm speeds up mode selection by reusing the decoded original mode types and analyzing the internal texture variation to quickly select the encoding mode; in addition, it uses motion region detection to judge the original SKIP mode. Simulation results indicate that, compared with the cascaded decoder-encoder approach, the algorithm saves nearly 70% of transcoding time by reducing computational complexity, with no noticeable loss of rate-distortion performance.
  • MA Ji,LIU Rui,ZHANG Jianxia
    Computer Engineering. 2016, 42(5): 258-262. https://doi.org/10.3969/j.issn.1000-3428.2016.05.045
    Aiming at the high error rate in reconstructing motion sequences from key frames, this paper proposes an improved t-SNE dimension reduction algorithm to extract key frames from human motion data. The improved t-SNE algorithm reduces the dimension of the original motion data, and a low-dimensional characteristic curve is obtained by changing how the width parameter of the t-SNE kernel function is computed. The local maxima and minima of the curve are adopted as initial key frames, and the final key frame sequence is extracted with a curve amplitude algorithm. Experimental results show that, at the same compression ratio, the proposed method has a lower reconstruction error than other key frame extraction methods.
  • WU Liangdi,FENG Gui
    Computer Engineering. 2016, 42(5): 263-268,274. https://doi.org/10.3969/j.issn.1000-3428.2016.05.046
    To reduce the intra coding complexity of High Efficiency Video Coding (HEVC), this paper proposes an algorithm that combines fast mode decision with Transform Unit (TU) size decision. The fast mode decision reduces the number of candidate prediction directions based on the texture of the Prediction Unit (PU) and skips Rate Distortion Optimization (RDO) for some specific modes based on the first RDO candidate set. In the TU size decision, TU splitting is terminated early based on the texture homogeneity of the residual block. Experimental results on HM10.1 demonstrate that the proposed algorithm reduces total encoding time by 30.7% on average with a 1.40% Bjontegaard Delta Bitrate (BDBR) increase.
  • WEI Hao,CHEN Huafeng,CHEN Jun
    Computer Engineering. 2016, 42(5): 269-274. https://doi.org/10.3969/j.issn.1000-3428.2016.05.047
    City surveillance camera networks provide powerful support for finding criminal suspects and tracing their routes. To benefit from such networks while optimizing camera placement to improve efficiency and reduce cost, coverage models of omnidirectional and directional cameras are discussed, and a city surveillance camera network model based on traffic road coverage is proposed. In particular, an optimal camera placement scheme is proposed based on a minimum vertex cover computed by an improved greedy algorithm. Experimental results show the superiority of the improved greedy algorithm over the greedy and mixed greedy algorithms, and demonstrate the effectiveness of the placement scheme on a real city traffic road graph.
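
    A minimal sketch of the classic greedy vertex-cover heuristic on a road graph: repeatedly place a camera at the intersection covering the most still-uncovered road segments. (The paper's improved greedy variant is not specified in the abstract and is not reproduced here.)

```python
def greedy_vertex_cover(edges):
    """Greedy heuristic: pick the vertex of highest residual degree until covered."""
    uncovered = set(edges)
    cover = []
    while uncovered:
        deg = {}
        for u, v in uncovered:                  # degree over uncovered edges only
            deg[u] = deg.get(u, 0) + 1
            deg[v] = deg.get(v, 0) + 1
        best = max(deg, key=deg.get)            # intersection covering most segments
        cover.append(best)
        uncovered = {e for e in uncovered if best not in e}
    return cover

roads = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("d", "e")]
print(greedy_vertex_cover(roads))   # camera sites covering every road segment
```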
  • FENG Yunjie,TU Weiping
    Computer Engineering. 2016, 42(5): 275-281. https://doi.org/10.3969/j.issn.1000-3428.2016.05.048
    Mobile scheduling systems are widely used in many industries, and hands-free speech communication plays an important role in them. But because hands-free speech is highly susceptible to echo and noise, many existing mobile scheduling terminals do not support this function, making simultaneous audio and video communication impossible. Aiming at this problem, and taking the data processing ability of mobile terminals into account, a front-end processing subsystem consisting of an echo cancellation module and a noise suppression module is proposed, based on the Normalized Least Mean Square (NLMS) adaptive algorithm and Minimum Mean Square Error-Short Time Spectral Amplitude (MMSE-STSA) spectral subtraction. Speech signals recorded on the mobile terminal are processed by the front-end modules before coding. Test results show that the proposed front-end speech processing subsystem notably improves voice quality in hands-free mode with low computational complexity.
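
    A minimal sketch of the NLMS echo-cancellation core (filter length, step size and the synthetic echo path are assumptions; the MMSE-STSA noise suppressor is omitted):

```python
import numpy as np

def nlms_echo_canceller(far_end, mic, taps=64, mu=0.5, eps=1e-6):
    """NLMS: adapt an FIR estimate of the echo path and subtract it from the mic."""
    w = np.zeros(taps)
    out = np.zeros(len(mic))
    for n in range(taps - 1, len(mic)):
        x = far_end[n - taps + 1:n + 1][::-1]   # newest far-end sample first
        e = mic[n] - w @ x                      # error = near-end speech + residual echo
        w += (mu / (eps + x @ x)) * e * x       # normalized step keeps adaptation stable
        out[n] = e
    return out

rng = np.random.default_rng(0)
far = rng.standard_normal(8000)
mic = np.convolve(far, [0.6, 0.3, 0.1])[:8000]  # synthetic echo, no near-end talker
residual = nlms_echo_canceller(far, mic)
print(float(np.mean(residual[-1000:] ** 2)))    # residual echo power decays toward 0
```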
  • LIN Weijian,WANG Lunyao,XIA Yinshui
    Computer Engineering. 2016, 42(5): 282-287. https://doi.org/10.3969/j.issn.1000-3428.2016.05.049
    To cope with the limitations of optimizing the area of a Finite State Machine (FSM) using only Traditional Boolean (TB) logic, a novel FSM area optimization algorithm using both traditional Boolean logic and Reed-Muller (RM) logic, namely dual logic, is proposed. Using bitwise operations between disjoint product terms, the logic cover is divided into two parts suited to TB logic synthesis and RM logic synthesis respectively. The number of TB product terms and RM literals forms a cost function that guides a genetic algorithm to complete the state assignment for FSM circuit area optimization under dual logic. The proposed algorithm is tested on the MCNC benchmarks. Experimental results show that, in contrast with the method employing only TB logic, the algorithm further reduces the area of 80% of the test circuits.
  • CHEN Jinchao,DU Chenglie
    Computer Engineering. 2016, 42(5): 288-291. https://doi.org/10.3969/j.issn.1000-3428.2016.05.050

    Aiming at the schedulability test problem for strictly periodic tasks in theoretical studies of system scheduling, this paper investigates the interference relationships between strictly periodic tasks in a real-time system and presents an eigentask-based schedulability test method. It analyzes the time constraints of strictly periodic tasks running on a uniprocessor platform, calculates all free time, and provides a sufficient and necessary schedulability condition by determining whether enough continuous free time is available for a task's execution. Experimental results show that, compared with the Eigen-mapping Task Assignment (EMTA) method, the proposed method reduces the time required for the test and improves the success ratio, giving better schedulability test performance.

  • ZHONG Guisen,YI Qingming,SHI Min
    Computer Engineering. 2016, 42(5): 292-296,303. https://doi.org/10.3969/j.issn.1000-3428.2016.05.051
    Because existing parallel Cyclic Redundancy Check (CRC) codecs for 10 Gigabit Ethernet cannot balance computing speed and resource usage, a new parallel CRC codec for 10 Gigabit Ethernet is designed. Encoding preprocessing easily solves the problem that variable-length bytes pose for CRC encoding, so the CRC encoding circuit can be designed simply. Decoding preprocessing separates the Frame Check Sequence (FCS) field from the Ethernet frame and restores the output data of the encoding preprocessor, which simplifies the CRC verification circuit. The traditional XOR circuit is optimized in the implementation of CRC encoding and verification, reducing gate delay and improving computation speed, and the codec switches encoding and verification methods automatically for compatibility with existing Ethernet. Experimental results show that, compared with three other methods, the proposed design occupies fewer logic resources, computes faster and produces output in real time, while satisfying the 156.25 MHz timing requirement of 10 Gigabit Ethernet.
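
    For reference, the bit-serial CRC-32 that such parallel designs unroll into wide XOR trees (reflected form, polynomial 0xEDB88320, as used by the Ethernet FCS):

```python
import zlib

def crc32(data: bytes) -> int:
    """Bit-serial reflected CRC-32: one XOR/shift per input bit."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0xEDB88320 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

frame = b"example ethernet payload"
assert crc32(frame) == zlib.crc32(frame)   # agrees with the standard implementation
print(hex(crc32(frame)))
```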
  • LI Xuesi,SHI Haobin,ZHANG Shuge,CHEN Xuanwen,WANG Cong
    Computer Engineering. 2016, 42(5): 297-303. https://doi.org/10.3969/j.issn.1000-3428.2016.05.052
    Aiming at the complex models, numerous manually specified parameters and heavy computation of existing gait planning methods for humanoid robots, this paper puts forward a gait planning method based on an improved Genetic Algorithm (GA). It gives an evaluation function for dynamic gait stability based on the zero moment point stability principle, and divides the omnidirectional gait into two separate component movements: straight walking and pose rotation. Furthermore, it uses cubic spline interpolation to plan the straight gait trajectory and gives a method for calculating the resultant movement when pose rotation is considered. Finally, it optimizes the parameters with the improved genetic algorithm, taking gait stability and speed as objectives. Experimental results demonstrate that this method achieves faster walking while preserving high gait stability for the humanoid robot.
  • ZHOU Yi,LIU Hangtian,DAI Zibin,ZHANG Lichao
    Computer Engineering. 2016, 42(5): 304-307,312. https://doi.org/10.3969/j.issn.1000-3428.2016.05.053
    Because USB 3.0 satisfies the bus bandwidth and transmission rate demands of high-speed peripherals, the USB 3.0 interface is widely used in them, and verifying the USB 3.0 controller has become an important part of System on Chip (SoC) functional verification. Based on an analysis of the structure of a USB 3.0 Verification Intellectual Property (VIP), this paper studies the VIP protocol stack and its configurable, extensible design, covering the protocol layer, link layer, physical layer, configuration class and callback class. Compared with directed testing, the proposed protocol stack greatly improves the efficiency of large-scale functional verification.
  • LI Tiantian,LU Gang,XU Nanshan,GUO Junxia
    Computer Engineering. 2016, 42(5): 308-312. https://doi.org/10.3969/j.issn.1000-3428.2016.05.054

    To optimize the visual layout of complex networks as they grow larger, this paper combines the force-directed algorithm with the k-core concept and proposes an improved compressing layout algorithm for large complex networks. Nodes are divided into categories by the k-core concept and handled differently according to their k-core values, which decreases the number of network nodes to lay out. In addition, the paper defines compression-oriented complex network information, which measures the compression effect quantitatively. Experimental results show that the proposed algorithm effectively uses limited display space, reduces overlap in the layout, shows the network structure clearly, and preserves the original network information.
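
    A minimal sketch of the k-core peeling that drives such node classification (the force-directed layout itself is omitted): repeatedly remove the node of smallest residual degree, recording the core number at which it leaves.

```python
def core_numbers(adj):
    """Peeling algorithm: each node's core number is the largest k such that it
    survives in a subgraph where every node has degree >= k."""
    deg = {u: len(vs) for u, vs in adj.items()}
    order = list(deg)
    core, removed, k = {}, set(), 0
    while order:
        order.sort(key=lambda u: deg[u])        # node with smallest residual degree
        u = order.pop(0)
        k = max(k, deg[u])
        core[u] = k
        removed.add(u)
        for v in adj[u]:
            if v not in removed:
                deg[v] -= 1
    return core

adj = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
print(core_numbers(adj))   # leaf 'd' is in the 1-core; triangle a-b-c is the 2-core
```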

  • QU Qiang,LIU Zhongxuan,CHEN Bo
    Computer Engineering. 2016, 42(5): 313-316. https://doi.org/10.3969/j.issn.1000-3428.2016.05.055
    To improve the accuracy and reliability of multi-sensor data fusion, an approach based on a modified reciprocal fuzzy nearness degree is proposed to calculate the weights of the fusion model. Study of the fusion model shows that fuzzy nearness is more practical than fuzzy membership for calculating weights. The fusion performance of five types of fuzzy nearness is analyzed, and the reciprocal fuzzy nearness proves to be among the best in terms of resolution and amount of calculation; however, it cannot suppress singular data well. To address this, the reciprocal fuzzy nearness is modified to improve the operability of the algorithm. Simulation analysis shows that, compared with other fuzzy nearness methods, the modified reciprocal fuzzy nearness approach fuses multi-sensor data with high accuracy and reliability.
  • SUN Xiaowen,SUN Ziwen,QIN Fang
    Computer Engineering. 2016, 42(5): 317-321. https://doi.org/10.3969/j.issn.1000-3428.2016.05.056
    To improve the accuracy of human fall detection, a fall detection algorithm based on the acceleration sensor of a smart phone is proposed. Acceleration information from human movement is collected by the smart phone, and a hybrid method combining threshold classification with pattern recognition classification is used to detect falls. Threshold classification makes a preliminary determination of human behavior; the accurate determination is realized by pattern recognition, which extracts inclination and slope as classification features for a Support Vector Machine (SVM) whose parameters are optimized by the Particle Swarm Optimization (PSO) algorithm. Compared with an SVM whose parameters are not optimized and with an acceleration-threshold algorithm, simulation results show that the proposed algorithm detects human falls more accurately.