
15 August 2020, Volume 46 Issue 8
    

  • Hot Topics and Reviews
  • WU Jigang, LIU Tonglai, LI Jingyi, HUANG Jinyao
    Computer Engineering. 2020, 46(8): 1-13. https://doi.org/10.19678/j.issn.1000-3428.0057749
    Mobile Edge Computing(MEC) technology has been widely used in services that require high real-time performance and intensive bandwidth resources, but its heterogeneous underlying infrastructure leads to multiple security and privacy issues. The decentralization, tamper proofing and anonymity of blockchain technology can provide MEC systems with a new trusted computing paradigm, which enables transaction authentication without a trusted third party and information recording in distributed environments, protecting the security of data during collection, transmission, storage and computing. Focusing on key issues in edge computing such as security, privacy protection and resource management, this paper reviews the applications and technology of blockchain in MEC with regard to data security, privacy protection, identity authentication, access control, computation migration and network management. Furthermore, this paper summarizes the ways in which blockchain technology is applied in edge computing scenarios, including Content Distribution Network(CDN), smart cities and smart healthcare, and lists the advantages of blockchain technology. Finally, this paper discusses possible research directions and applications of blockchain technology in MEC.
  • WANG Jinsong, LÜ Zhimei, ZHAO Zening, ZHANG Hongwei
    Computer Engineering. 2020, 46(8): 14-20. https://doi.org/10.19678/j.issn.1000-3428.0056589
    Bitcoin is a blockchain-based cryptocurrency that is often used in abnormal transaction activities because of its pseudo-anonymity. Bitcoin entity recognition is usually implemented with traditional heuristic clustering algorithms, but these algorithms do not consider how to fuse results when new data arrives. To this end, this paper proposes an incremental clustering method based on the characteristics of Bitcoin transaction data. Firstly, the block data is analysed to obtain the clustering transactions of wallet addresses, forming clustered address groups. Then, by looking up the address hash table, the relationships between the clustered entities are extracted. Finally, based on the union-find algorithm, the wallet address data of each block is incrementally clustered to obtain new Bitcoin entity relationships and thereby infer the type of each entity. At the same time, the entities are identified and labelled to support visual analysis of their transaction behaviour. Experimental results show that the proposed method accurately implements incremental clustering of addresses and displays the evolution process of Bitcoin entities. Compared with the heuristic clustering method, the proposed method has lower time complexity.
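A minimal sketch of the union-find merging step described in the abstract above, under the common multi-input heuristic (addresses that co-occur as inputs of one transaction belong to one entity); the data structures, address values and function names are illustrative, not taken from the paper.

```python
class UnionFind:
    """Disjoint-set forest with path halving and union by rank."""
    def __init__(self):
        self.parent, self.rank = {}, {}

    def find(self, x):
        if x not in self.parent:
            self.parent[x], self.rank[x] = x, 0
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1


def cluster_block(uf, transactions):
    """Incrementally merge wallet addresses that co-occur as inputs of one
    transaction; new blocks can be fed in without re-clustering earlier data."""
    for tx_inputs in transactions:
        first = tx_inputs[0]
        for addr in tx_inputs[1:]:
            uf.union(first, addr)


# Illustrative usage with made-up addresses.
uf = UnionFind()
cluster_block(uf, [["a1", "a2"], ["a2", "a3"]])   # block N
cluster_block(uf, [["a3", "a4"], ["b1", "b2"]])   # block N+1 (incremental)
print(uf.find("a1") == uf.find("a4"))  # True: same entity
print(uf.find("a1") == uf.find("b1"))  # False: different entity
```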
  • WANG Weimei, SHI Yimin, LI Guanyu
    Computer Engineering. 2020, 46(8): 21-26. https://doi.org/10.19678/j.issn.1000-3428.0055390
    To complete the missing relations between entities in a knowledge graph, this paper proposes an improved knowledge graph completion method based on capsule networks. First, each triplet is represented as a 3-column matrix, which is convolved with multiple filters to produce different feature maps. Secondly, these feature maps are reconstructed into corresponding capsules, each composed of a group of neurons. Lower-dimensional capsules are produced through a routing operation, and a continuous vector is then generated. Finally, a dot product between this vector and a weight vector is used to construct a ranking function that determines the correctness of a given triplet. Experiments on link prediction and triplet classification are carried out on the public datasets WN18RR, FB15K-237 and FB15K. The experimental results show that the proposed method outperforms DistMult, ComplEx, ConvE and other models in link prediction. It also outperforms TransE, TransH, TransR and other models in triplet classification, increasing the accuracy to 91.5%.
  • WANG Yan, GE Haibo, FENG Anqi
    Computer Engineering. 2020, 46(8): 27-34. https://doi.org/10.19678/j.issn.1000-3428.0055845
    Mobile Edge Computing(MEC) reduces delay and energy consumption by migrating computing resources to the network edge. Compared with cloud computing, however, edge computing has limited computing resources and cannot meet the needs of all mobile services. To address this problem, this paper proposes a computation offloading strategy for cloud-assisted mobile edge computing. The mobile service is modeled as a workflow with precedence constraints to analyze the delay and energy consumption during system operation. Then, taking the minimization of the total system cost(the weighted sum of delay and energy consumption) as the objective, a computation offloading algorithm is designed on the basis of an improved Genetic Algorithm(GA), in which the coding, crossover and mutation operations are partially modified. Simulation results show that, compared with the All-Local algorithm, the Random algorithm and the ECGA algorithm, the proposed algorithm achieves the lowest total system cost.
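A toy sketch of a genetic algorithm searching over binary offloading decisions to minimize a weighted delay-plus-energy cost. The per-task costs, population sizes and operators are stand-ins; the paper's workflow precedence constraints and its modified GA operators are not reproduced here.

```python
import random

# Toy cost model: per-task weighted delay+energy if run locally vs. offloaded.
local_cost = [5.0, 3.0, 8.0, 2.0, 6.0]
offload_cost = [2.0, 4.0, 3.0, 3.0, 2.5]
N = len(local_cost)

def fitness(chrom):
    # chrom[i] == 1 means task i is offloaded to the edge/cloud
    return sum(offload_cost[i] if g else local_cost[i] for i, g in enumerate(chrom))

def crossover(a, b):
    p = random.randrange(1, N)
    return a[:p] + b[p:], b[:p] + a[p:]

def mutate(chrom, rate=0.1):
    return [1 - g if random.random() < rate else g for g in chrom]

def genetic_offloading(pop_size=20, generations=50):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                      # lower cost is better
        elite = pop[: pop_size // 2]               # selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            c1, c2 = crossover(a, b)
            children += [mutate(c1), mutate(c2)]
        pop = elite + children[: pop_size - len(elite)]
    best = min(pop, key=fitness)
    return best, fitness(best)

print(genetic_offloading())   # best offloading vector and its total cost
```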
  • DAI Junjie, SHEN Subin
    Computer Engineering. 2020, 46(8): 35-42,49. https://doi.org/10.19678/j.issn.1000-3428.0056422
    To make full use of idle resources on various devices at the edge of the network and avoid the single point of failure and trust problems caused by the traditional centralized management mode, it is necessary to manage and allocate these resources effectively in a decentralized way. Blockchain, as a decentralized, tamper-proof and traceable technology, provides an ideal solution. To address the above problem, this paper proposes a blockchain-based allocation method for decentralized edge computing resources, which realizes decentralized and trusted storage of the key information about resource allocation. Based on this information, users can verify the validity of resource access requests from other participants. On the basis of the Ethereum platform and the smart contract technology it supports, an implementation scheme of the allocation method is designed. At the same time, to deal with the scalability problems faced by current blockchain technology, the sidechain technique is used to scale Ethereum. Test results on a simulated Ethereum private blockchain network prove the correctness and feasibility of the proposed method.
  • WANG Ruyan, LIU Yuzhe, ZHANG Puning, KANG Xuyuan, LI Xuefang
    Computer Engineering. 2020, 46(8): 43-49. https://doi.org/10.19678/j.issn.1000-3428.0055501
    To address the high real-time requirements of Internet of Things(IoT) entity search and the time-varying features of physical entities, this paper proposes an Edge and Cloud Collaborative Entity Search Method(ECCS) for IoT. The method takes advantage of edge computing and cloud computing to construct an entity search architecture based on edge-cloud collaboration, so as to improve the search efficiency of IoT entities. Furthermore, to address the limited communication capabilities of the sensors embedded in physical entities, an entity identification algorithm based on Deep Belief Network(DBN) is proposed, which stores the status information of popular entities and unpopular entities on the edge server and in the cloud respectively to reduce the storage and computing cost of the edge server. Simulation results demonstrate that, compared with the cloud data sharing search method(SeDaSC) and the hierarchical search method(LHPM), the proposed method effectively improves the real-time performance and accuracy of searching entity status information.
  • Artificial Intelligence and Pattern Recognition
  • YANG Ruipeng, QU Dan, ZHU Shaowei, QIAN Yekui, TANG Yongwang
    Computer Engineering. 2020, 46(8): 50-57. https://doi.org/10.19678/j.issn.1000-3428.0055553
    Existing anomaly detection models for log sequences based on recurrent neural networks perform well on shorter sequences but underperform on long ones. To address this problem, this paper proposes a general anomaly detection framework for log sequences based on Temporal Convolutional Network(TCN). By modeling the log template sequence as a natural language sequence and using word embeddings trained with a neural network as the model input, the semantic information of the target words in the current log sequence can be represented, and the computing efficiency of the whole framework can be improved through dimension reduction. In addition, the framework replaces ReLU with parametric ReLU and the fully connected layer with an adaptive average pooling layer, and the anomaly detection problem for log sequences is modeled as a natural language sequence generation problem. Experimental results show that the overall accuracy of the detection framework is higher than that of TCN+Linear, TCN+AAP and other methods.
  • LIU Yuhang, MA Huifang, LIU Haijiao, YU Li
    Computer Engineering. 2020, 46(8): 58-63,71. https://doi.org/10.19678/j.issn.1000-3428.0054555
    Most existing clustering algorithms for high-dimensional sparse data do not consider overlapping clusters and outliers, resulting in unsatisfactory clustering results. Therefore, this paper proposes an overlapping subspace K-Means clustering algorithm. A computing strategy for the cluster subspace is given: the attribute subspace of each cluster is dynamically updated during clustering, and a reasonable constraint function is defined to guide the process, so as to realize cluster overlap and outlier control. On this basis, a reasonable objective function is defined to modify the traditional K-Means algorithm, and the weight of each dimension in each cluster is calculated by using an entropy weight constraint. The weight values identify the relative importance of dimensions in different clusters, and parameters are added to control the degree of overlap and the number of outliers. Experimental results on artificial and real datasets show that the proposed algorithm outperforms EWKM, NEO-K-Means, OKM and other subspace clustering algorithms in terms of NMI and F1 indicators, achieving better clustering results.
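A minimal sketch of entropy-weighted subspace k-means of the kind the abstract builds on (in the style of EWKM, one of its baselines): each cluster keeps its own per-dimension weights, updated from within-cluster dispersion through an entropy regularizer. The overlap and outlier handling described in the paper is omitted, and all names and parameters are illustrative.

```python
import numpy as np

def entropy_weighted_kmeans(X, k, gamma=1.0, iters=20, seed=0):
    """Alternate cluster assignment, center update, and entropy-based
    per-cluster dimension-weight update."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    centers = X[rng.choice(n, k, replace=False)]
    weights = np.full((k, d), 1.0 / d)
    for _ in range(iters):
        # assignment with per-cluster weighted squared distance
        dist = np.stack([((X - centers[l]) ** 2 * weights[l]).sum(axis=1)
                         for l in range(k)])           # shape (k, n)
        labels = dist.argmin(axis=0)
        for l in range(k):
            members = X[labels == l]
            if len(members) == 0:
                continue
            centers[l] = members.mean(axis=0)
            disp = ((members - centers[l]) ** 2).mean(axis=0)  # per-dimension dispersion
            e = np.exp(-disp / gamma)
            weights[l] = e / e.sum()                           # entropy-weight update
    return labels, centers, weights

# Two clusters separated only in dimensions 0 and 1; dimensions 2-4 are noise.
data_rng = np.random.default_rng(1)
a = np.hstack([data_rng.normal(5, 0.2, (50, 2)), data_rng.normal(0, 1.0, (50, 3))])
b = np.hstack([data_rng.normal(-5, 0.2, (50, 2)), data_rng.normal(0, 1.0, (50, 3))])
labels, centers, weights = entropy_weighted_kmeans(np.vstack([a, b]), k=2)
print(weights.round(2))   # weights concentrate on the low-dispersion dimensions 0 and 1
```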
  • YANG Yong, YANG Liang, ZOU Yanbo, REN Ge, FAN Xiaochao
    Computer Engineering. 2020, 46(8): 64-71. https://doi.org/10.19678/j.issn.1000-3428.0057138
    This paper proposes a hierarchical attention neural network model based on pronunciation, font and semantic features(PFSHAN) for humor recognition, which extracts the linguistic features of English humor. During the feature extraction stage, humor texts are represented as phoneme, character and semantic information carrying ambiguity-level information, and the pronunciation, font and semantic features of the PFSHAN model are extracted by using the Convolutional Neural Network(CNN), Bi-directional Gated Recurrent Unit(Bi-GRU) and attention mechanism. During the feature fusion stage, because words contribute differently to the linguistic features of humor, and these linguistic features are correlated differently with sentences, a hierarchical attention mechanism is used to adjust the influence of different linguistic features on the performance of the PFSHAN model. Experimental results on the Puns and Onliner datasets show that the F1 scores of the PFSHAN model are 91.03% and 91.11% respectively, significantly improving humor recognition performance.
  • WU Qingchun, JIA Caiyan
    Computer Engineering. 2020, 46(8): 72-77,84. https://doi.org/10.19678/j.issn.1000-3428.0054954
    Recommendation systems can effectively alleviate information overload and provide personalized recommendation services for users. However, traditional models that generate predictions only by analyzing the user-item rating matrix perform poorly when the rating matrix is sparse. To address this problem, this paper uses the rating information and social trust relationships of users to calculate user similarity, and on this basis proposes a Matrix Factorization(MF) recommendation model named SoRegIM that fuses social relationships. By mining the topological relationships of users in the social network, a weighted social trust network is constructed from the direct and indirect neighbors of the target users, so as to reduce redundant social noise while making full use of the social relationship information of users. Experimental results on open datasets show that, compared with classical models such as SoRec and SocialMF, SoRegIM has higher recommendation accuracy and shows an obvious improvement on sparse data.
  • KE Xiangmin, CHEN Jiang, LUO Guanghua
    Computer Engineering. 2020, 46(8): 78-84. https://doi.org/10.19678/j.issn.1000-3428.0054596
    Collaborative filtering recommendation algorithms recommend commodities by calculating the similarity of user behavior, but the similarity calculation of traditional collaborative filtering algorithms suffers from a certain degree of distortion. In view of this problem, this paper proposes the concepts of inverse popularity and common interest items, based on the idea that less popular items should be assigned more weight. On this basis, a similarity calculation method is proposed that reduces the weight of highly popular items in the similarity calculation, so as to reduce the influence of popular items on user personalization, and increases the weight of the number of common interest items in the similarity. A new recommendation model is then established to find the user set with the highest similarity to the target user. Experimental results on the MovieLens dataset show that the proposed similarity calculation method improves recommendation performance and achieves higher precision, recall and F1 score than the Cosin, Pearson and Corrcosin methods.
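A small sketch of an inverse-popularity-weighted user similarity, illustrating the idea that co-rated niche items should count more than co-rated blockbusters. The weighting (1/log(1+popularity)) and normalization are one common choice and are assumptions, not the paper's exact formula; the toy data is made up.

```python
import math
from collections import defaultdict

# Toy user -> rated-item sets.
ratings = {
    "u1": {"i1", "i2", "i3"},
    "u2": {"i1", "i3", "i4"},
    "u3": {"i2", "i5"},
}

# Item popularity = number of users who rated the item.
popularity = defaultdict(int)
for items in ratings.values():
    for i in items:
        popularity[i] += 1

def similarity(u, v):
    """Down-weight popular co-rated items so that shared interest in
    unpopular items contributes more to the similarity."""
    common = ratings[u] & ratings[v]
    if not common:
        return 0.0
    weighted = sum(1.0 / math.log(1 + popularity[i]) for i in common)
    # cosine-style normalisation by the sizes of the two rating sets
    return weighted / math.sqrt(len(ratings[u]) * len(ratings[v]))

print(round(similarity("u1", "u2"), 3))
print(round(similarity("u1", "u3"), 3))
```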
  • ZHAO Yanan, LIU Yuan, SONG She
    Computer Engineering. 2020, 46(8): 85-92. https://doi.org/10.19678/j.issn.1000-3428.0055117
    Existing text sentiment analysis methods fail to efficiently capture the emotional features of texts, which weakens sentiment analysis performance. To solve this problem, this paper proposes a regression model, DLMA-CNN, which uses the multi-head self-attention mechanism to learn word dependency relationships within a sequence and capture its internal structure. The model reuses shallow features and fuses them with multi-head self-attention features, using a Convolutional Neural Network(CNN) to achieve better sentiment polarity analysis of texts. Experiments on the benchmark dataset SemEval-2017 Task 5 show that the proposed model outperforms CNN, ELSTM, Att-BLSTM and other methods in sentiment polarity analysis, and runs more efficiently.
  • TANG Weitao, YU Dunhui, WEI Shiwei
    Computer Engineering. 2020, 46(8): 93-100. https://doi.org/10.19678/j.issn.1000-3428.0057570
    Existing comment-based commodity recommendation algorithms do not make full use of the correlation information between user comments. To address this issue, this paper proposes a commodity recommendation algorithm that fuses knowledge graphs and user comments. The algorithm uses a knowledge graph to extract commodity features and emotional words, constructs the commodity feature set and commodity vectors, and calculates the commodity similarity matrix. Then the commodity feature scores are determined according to the emotional words, and the weight of each commodity feature is determined by random walks over commodity nodes. Based on the scores and weights of commodity features, the recommendation value of each commodity is calculated and Top-k recommendation is performed. Comparison experiments with the knowledge graph-based recommendation algorithm, collaborative filtering recommendation algorithm, content-based recommendation algorithm and hybrid recommendation algorithm show that the proposed algorithm increases precision by up to 15.81%, recall by up to 7.27% and F-score by up to 8.55%.
  • YUAN Zheming, YANG Jingjing, CHEN Yuan
    Computer Engineering. 2020, 46(8): 101-105. https://doi.org/10.19678/j.issn.1000-3428.0055388
    As a key step of machine learning, feature selection is usually implemented with the minimal Redundancy Maximal Relevance(mRMR) method, but its correlation measure and redundancy measure are not directly comparable, and it cannot automatically terminate the introduction of features. To address these problems, this paper proposes a feature selection method(MIC-share) based on the Maximum Information Coefficient(MIC) and a redundancy allocation strategy. MIC is used to measure both correlation and redundancy, and the redundancy allocation strategy is used to obtain new feature scores, so the process of feature introduction stops automatically and the time required to determine the optimal subset is reduced. Simulation results show that, compared with PLSR, MIFS, KNN-FABC and other feature selection methods, the proposed method reduces the Root Mean Square(RMS) error on regression data and also reduces the error rate on classification data.
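A sketch of greedy relevance-minus-redundancy feature selection with an automatic stopping rule, in the spirit of the method above. The dependence measure here is absolute Pearson correlation as a stand-in for MIC, and the redundancy rule (maximum over already-selected features) is one simple choice, not the paper's allocation strategy; data and names are illustrative.

```python
import numpy as np

def dependence(x, y):
    """Stand-in dependence measure (absolute Pearson correlation);
    the paper uses the Maximum Information Coefficient(MIC) instead."""
    return abs(np.corrcoef(x, y)[0, 1])

def greedy_select(X, y):
    """Forward selection: score = relevance to the target minus redundancy
    with selected features; introduction stops automatically when no
    remaining feature has a positive score."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining:
        scores = {}
        for j in remaining:
            rel = dependence(X[:, j], y)
            red = (max(dependence(X[:, j], X[:, s]) for s in selected)
                   if selected else 0.0)
            scores[j] = rel - red
        best = max(scores, key=scores.get)
        if scores[best] <= 0:          # automatic termination
            break
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
X[:, 3] = X[:, 0] + 0.05 * rng.normal(size=200)     # near-copy of feature 0
y = 2 * X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=200)
# Typically selects features 0 and 1 first; the near-copy feature 3 is
# rejected because its redundancy cancels its relevance.
print(greedy_select(X, y))
```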
  • Cyberspace Security
  • ZHANG Yulei, SONG Tingting, ZHANG Yongjie, WANG Caifen
    Computer Engineering. 2020, 46(8): 106-111. https://doi.org/10.19678/j.issn.1000-3428.0056046
    Fail-stop group signature schemes can prevent adversaries with strong computing power from forging signatures, thereby protecting user privacy. On the basis of the fail-stop group signature scheme, this paper designs a fail-stop ring signature scheme using a certificateless cryptosystem. In the scheme, the signature simultaneously combines strong anonymity and optional linkability with the fail-stop feature. Under the random oracle model, the scheme satisfies Existential UnForgeability against adaptive Chosen-Message Attacks(EUF-CMA). Performance analysis results show that the proposed scheme has lower computational overhead and stronger key management security than existing ring signature schemes.
  • YU Qingying, WANG Yanfei, YE Zitong, ZHANG Shuanggui, CHEN Chuanming
    Computer Engineering. 2020, 46(8): 112-118. https://doi.org/10.19678/j.issn.1000-3428.0055431
    To address the privacy leakage caused by trajectory sequences in trajectory data publication, this paper proposes a privacy protection algorithm, TPL-Local, based on optimized local suppression. The algorithm identifies the minimal violating sequence set in the trajectory dataset and determines the suppression mode of each sequence. Then a score table of the instances in the minimal violating sequence set is constructed, and instances are chosen and suppressed according to their scores. To implement privacy protection of trajectory data, global suppression is replaced by local suppression; by reducing the instance loss caused by global suppression, the data loss rate is reduced and trajectory data availability is improved. Finally, the data utility loss of the proposed algorithm is compared with that of the KCL-Local algorithm on synthetic datasets, and experimental results show that the proposed algorithm can ensure the security of trajectory data while improving data availability.
  • LU Jiajia, DU Yusong
    Computer Engineering. 2020, 46(8): 119-123. https://doi.org/10.19678/j.issn.1000-3428.0055206
    Discrete Gaussian sampling over the integers is a basic operation in lattice-based cryptosystems and a determinant of their security, but a variable-time implementation may be subject to timing attacks and leak secret information. To address this problem, this paper proposes a constant-time implementation method for discrete Gaussian sampling over the integers based on the Knuth-Yao algorithm. The method calculates the probability matrix of a given discrete Gaussian distribution and determines the Hamming weight of each column vector of the probability matrix, and the computation is vectorized with Single Instruction Multiple Data(SIMD) instructions to improve the sampling speed. Experimental results show that, compared with the standard Knuth-Yao method whose running time is variable, the proposed constant-time implementation remains efficient, reaching 14.9×10⁶ samples/s with SIMD vectorization.
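A small sketch of the preprocessing step named in the abstract: building the binary probability matrix of a discrete Gaussian and the Hamming weight of each column, which Knuth-Yao sampling walks through. For brevity this covers only the nonnegative half of the distribution (a sign bit extends it to all integers); the bit precision, tail bound and sigma are illustrative, and the constant-time SIMD scan itself is not shown.

```python
import math

def gaussian_prob_matrix(sigma, tail_bound, precision=32):
    """Row i holds the first `precision` binary digits of the probability of
    sampling i from a (half-)discrete Gaussian on {0, ..., tail_bound};
    the per-column Hamming weights drive the Knuth-Yao DDG-tree scan."""
    probs = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(tail_bound + 1)]
    norm = sum(probs)
    probs = [p / norm for p in probs]
    matrix = []
    for p in probs:
        bits, frac = [], p
        for _ in range(precision):
            frac *= 2
            bit = int(frac)
            bits.append(bit)
            frac -= bit
        matrix.append(bits)
    col_hamming = [sum(row[c] for row in matrix) for c in range(precision)]
    return matrix, col_hamming

matrix, col_hamming = gaussian_prob_matrix(sigma=3.2, tail_bound=12)
print(col_hamming[:8])   # Hamming weights of the first eight probability columns
```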
  • ZHOU Quanxing, LI Qiuxian, FAN Meimei
    Computer Engineering. 2020, 46(8): 124-131,138. https://doi.org/10.19678/j.issn.1000-3428.0056756
    With the rapid development of the big data industry, the demand for delegated computation services is increasing, and cloud computing keeps improving the efficiency of delegated computation. However, traditional delegated computation protocols need to verify the computation results, which reduces computing efficiency. To address this problem, this paper proposes a rational anti-collusion delegated computation protocol based on a three-party game and smart contracts. It combines game theory with traditional delegated computation, establishes a three-party game model for delegated computation using a reputation mechanism, and uses Ethereum-based blockchain technology to design the smart contract and the rational delegated computation protocol, so as to ensure the correctness of the computation results. Experimental results show that, compared with direct calculation and the traditional delegated computation protocol, the proposed protocol reduces computation time and increases computation efficiency, and enables the three-party game to reach a Bayesian Nash equilibrium.
  • ZHONG Zhicheng, XU Bingfeng, GU Jiugen
    Computer Engineering. 2020, 46(8): 132-138. https://doi.org/10.19678/j.issn.1000-3428.0055631
    To reduce the defense cost of Cyber Physical System(CPS) and improve the effectiveness of defense measures, this paper proposes a method based on the Attack Defense Tree(ADTree) to calculate the minimal defense cost of CPS and implements a calculation tool. Firstly, the concept of the atomic attack defense tree(A2DTree) is proposed by adding constraints to the ADTree. Secondly, the ADTree is transformed into an A2DTree by preprocessing, and the minimal defense cost is calculated by an algebraic method. On this basis, a tool for minimal defense cost calculation is designed and implemented in Java on the Eclipse platform. The effectiveness of the method is verified by an experiment on a typical power system case study. Results show that the proposed method can correctly and efficiently calculate the minimal defense cost of an ADTree.
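A toy recursion over an attack tree with leaf countermeasure costs, only to illustrate the general shape of minimal-defense-cost computation. The semantics assumed here (an OR of attacks is blocked only if every alternative is blocked, an AND of attacks is blocked if any required step is blocked) are a simplification; the paper's A2DTree model and algebraic method are more general, and the example tree is invented.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    kind: str                    # "leaf", "or", "and" (from the attacker's view)
    defense_cost: float = 0.0    # cost of the countermeasure attached to a leaf
    children: List["Node"] = field(default_factory=list)

def min_defense_cost(node: Node) -> float:
    """OR of attacks: sum of child defense costs (every alternative must be
    blocked); AND of attacks: min of child costs (blocking one step suffices)."""
    if node.kind == "leaf":
        return node.defense_cost
    child_costs = [min_defense_cost(c) for c in node.children]
    return sum(child_costs) if node.kind == "or" else min(child_costs)

# Attack succeeds via (phishing OR password guessing) AND physical access.
tree = Node("and", children=[
    Node("or", children=[Node("leaf", defense_cost=3.0),    # anti-phishing training
                         Node("leaf", defense_cost=2.0)]),  # login rate limiting
    Node("leaf", defense_cost=4.0),                         # door access control
])
print(min_defense_cost(tree))   # 4.0: blocking physical access alone stops the attack
```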
  • LIU Yali, SHI Ruifeng, REN Xiaoliang
    Computer Engineering. 2020, 46(8): 139-145,152. https://doi.org/10.19678/j.issn.1000-3428.0055059
    In the Long Term Evolution(LTE) core network, message detection is used in the Policy Control and Charging(PCC) system to inspect packets passing through the packet data network gateway, which reduces the security threat posed by malicious users. For this purpose, this paper proposes an intrusion recognition method for the LTE core network based on low-complexity random packet detection. The basic architecture of the PCC system of the 3rd Generation Partnership Project(3GPP) evolved packet core is given, and on this basis a construction method for an analytical model of the random packet detection scheme is proposed. This model is used to evaluate detection performance in terms of the intrusion detection rate. At the same time, a random packet detection scheme is designed to optimize the cost of Deep Packet Inspection(DPI) implementation. Experimental results show that the model provides an effective means to set the detection rate and achieves a balance among detection rate, detection cost and detection delay.
  • ZHANG Ling, ZHANG Jianwei, SANG Yongxuan, WANG Bo, HOU Zexiang
    Computer Engineering. 2020, 46(8): 146-152. https://doi.org/10.19678/j.issn.1000-3428.0057085
    Traditional intrusion detection methods have low detection rates for Probe, U2R, R2L and other types of network intrusion attacks, leading to misdetection and missed detection of intrusion behavior. Therefore, this paper proposes an intrusion detection algorithm based on Random Forest(RF) and artificial immunity. A random antibody forest detection strategy is designed, and a Clone Selection Algorithm(CSA) for small-sample datasets is adopted to ensure the superiority of antibodies and improve the detection rate of attacks. Then the antigens recognized as intrusive behavior are injected into the antibody set to balance the detection rate and false alarm rate. Simulation results show that the overall detection rate of the proposed algorithm reaches 94.1%, with detection rates of 93.79% for Probe, 91% for U2R and 85% for R2L attacks, and the algorithm also has a low false alarm rate.
  • Mobile Internet and Communication Technology
  • LIU Ying, WANG Cong, YUAN Ying, JIANG Guojia, LIU Kezhen, WANG Cuirong
    Computer Engineering. 2020, 46(8): 153-159. https://doi.org/10.19678/j.issn.1000-3428.0055327
    Virtual network mapping is a key problem in cloud resource leasing, as it allocates the underlying hardware resources to user requests. At present, most studies focus only on the profit of the physical network and ignore its energy consumption. Therefore, this paper proposes VNE-MOPSO, a multi-objective virtual network mapping algorithm that jointly considers revenue and energy consumption. A Pareto entropy-based multi-objective optimization model is introduced to calculate the difference entropy between two consecutive iterations. The evolution of the population is then evaluated, and the evaluation result serves as feedback to a dynamic adaptive particle parameter strategy, so as to obtain an approximately optimal multi-objective mapping scheme. Simulation results show that, compared with single-objective mapping algorithms, the proposed algorithm has lower mapping cost and energy consumption, and its average revenue is significantly improved.
  • LI Yanli, WANG Xiaonan
    Computer Engineering. 2020, 46(8): 160-163,171. https://doi.org/10.19678/j.issn.1000-3428.0055233
    To address the problem that the speed of vehicle nodes affects the stability of IPv6 vehicular networks, this paper proposes an address configuration scheme for all-IP vehicular networks. The scheme constructs a hierarchical structure for the all-IP vehicular network and its IPv6 addresses to enhance network stability and scalability. The paper analyzes the influence of vehicle node speed on address configuration performance, and on this basis designs a low-latency address configuration scheme for IPv6 vehicular networks that enables vehicle nodes to quickly obtain globally unique IPv6 addresses. Simulation results show that, compared with the existing hierarchical address configuration scheme for vehicular networks, the proposed scheme reduces the delay and cost of address configuration while increasing link stability.
  • WU Yucheng, LI Liang, MA Yunfei, LIU Tong
    Computer Engineering. 2020, 46(8): 164-171. https://doi.org/10.19678/j.issn.1000-3428.0055386
    In practical multi-cell multi-user massive MIMO systems, a large number of users may be randomly allocated to the same sector, resulting in degraded communication quality or even failure to communicate reliably. To address this problem, this paper proposes a pilot allocation method based on user location information. The method takes the differences in the angles of arrival at the base station and the distances between users as the basis for allocation, and designs an interference metric function to guide the dynamic pilot allocation of all users in the cell, thereby reducing user interference within the sector. Simulation results show that, compared with the fully orthogonal and fully multiplexed pilot allocation methods, the proposed method significantly improves the average Signal to Interference plus Noise Ratio(SINR) of users when the number of antennas is small, and still maintains good overall system performance as the number of antennas increases. It also increases the number of user connections and ensures reliable simultaneous communication among a large number of users in the same sector.
  • WANG Miao, CAI Xiaoxia, LEI Yingke
    Computer Engineering. 2020, 46(8): 172-177,183. https://doi.org/10.19678/j.issn.1000-3428.0055502
    The high hop rate and variable hopping speed of Variable Speed Frequency Hopping(VSFH) signals make source signal separation more difficult, and existing underdetermined blind source separation algorithms based on Sparse Component Analysis(SCA) cannot recover the signals with high precision. To solve this problem, this paper proposes an improved blind source separation algorithm for underdetermined VSFH signals. According to the weak sparsity of VSFH signals in the time-frequency domain, a noise threshold is determined adaptively, and the time-frequency single-source-point detection algorithm is improved by using the noise threshold and decomposition eigenvalues to increase the estimation accuracy of the mixing matrix. At the same time, the idea of clustering and sparse reconstruction is applied to source signal separation in order to obtain signals with good sparsity. Experimental results show that the proposed algorithm achieves 90% similarity between the recovered signals and the source signals, and the estimation accuracy of the mixing matrix is effectively improved compared with that of the traditional single-source-point detection algorithm.
  • LI Zheng, DING Sheng, WANG Xiaoxiao, LIU Qilie
    Computer Engineering. 2020, 46(8): 178-183. https://doi.org/10.19678/j.issn.1000-3428.0055195
    Network virtualization is an important technology for solving the rigidity of wireless network architectures. Most mapping algorithms consider only the acceptance rate of virtual network mapping and ignore the large amount of resource fragmentation caused by the dynamic network environment, which affects the mapping results for networks with large topologies. Therefore, a new adjacent-node grouping mapping algorithm is proposed. A new node sorting method is designed, and an interference coefficient is used to describe the interference between wireless links, so that the path with the lowest interference is selected for mapping. Experimental results show that the proposed algorithm outperforms classical algorithms in terms of request acceptance rate, revenue-to-overhead ratio and the success rate of accepting networks with large topologies.
  • DONG Xuanjiang, LI Shibao, CAI Liping, YUAN Jing
    Computer Engineering. 2020, 46(8): 184-189,196. https://doi.org/10.19678/j.issn.1000-3428.0055410
    To address the low efficiency caused by the large number of time slots required for multi-tag identification in Radio Frequency Identification(RFID) anti-collision systems, this paper proposes an anti-collision query tree algorithm based on aligned mapping and double grouping. The tags are grouped horizontally according to the number of digits of their identification codes and vertically according to the XOR results of the identification codes, and different group labels are assigned to different groups. Then, based on the aligned mapping rules, different mapping data are obtained from the group label and identification code. On this basis, the reader uses the mapping rule to reason backward from the collision information and obtain the query prefix. Finally, the collision information is grouped, decoded, pushed onto and popped from the stack to complete tag identification. Simulation results show that, compared with the traditional query tree algorithm, the octree search algorithm, and the A4PQT and GBAQT algorithms, the proposed algorithm can effectively reduce the total number of time slots and improve system efficiency.
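For reference, a simulation sketch of the traditional query tree protocol used above as the baseline: the reader broadcasts bit prefixes, and collisions split a prefix into two longer ones. Tag IDs and the slot accounting are illustrative; the paper's grouping and aligned-mapping improvements are not implemented here.

```python
from collections import deque

def query_tree_identify(tag_ids):
    """Baseline query-tree anti-collision: tags whose ID starts with the
    broadcast prefix respond; no response prunes the branch, a single
    response identifies a tag, a collision splits the prefix.
    Returns the identified tags and the number of query slots used."""
    identified, slots = [], 0
    queue = deque([""])                 # pending query prefixes
    while queue:
        prefix = queue.popleft()
        slots += 1
        responders = [t for t in tag_ids if t.startswith(prefix)]
        if len(responders) == 0:
            continue                    # idle slot
        if len(responders) == 1:
            identified.append(responders[0])   # readable slot
            continue
        queue.append(prefix + "0")      # collision slot: split the branch
        queue.append(prefix + "1")
    return identified, slots

tags = ["0110", "0111", "1010", "1100"]
ids, slots = query_tree_identify(tags)
print(ids, slots)   # all four tags identified; slots include idle and collision slots
```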
  • Computer Architecture and Software Technology
  • DAI Wei, LU Yuliang, ZHU Kailong
    Computer Engineering. 2020, 46(8): 190-196. https://doi.org/10.19678/j.issn.1000-3428.0055782
    Directed Gray-box Fuzzing(DGF) is a fuzzing technique that quickly generates test cases to reach a given target area of a program and find vulnerabilities, but existing DGF techniques often fail to pass checking statements such as magic-byte comparisons, and their path coverage of the target area is not high. To address these problems, this paper proposes a DGF technique that incorporates hybrid symbolic execution. By tracking the execution paths of seeds, the genetic mutation of seeds is assisted by a constraint solver to generate test cases that can pass checking statements, so as to test the target area more deeply and effectively. Experimental results show that the proposed technique improves the coverage of the target area and has high application value in patch testing and high-risk code area detection.
  • FAN Guisheng, DIAO Xuyang, YU Huiqun, CHEN Liqiong
    Computer Engineering. 2020, 46(8): 197-202,209. https://doi.org/10.19678/j.issn.1000-3428.0055054
    In cross-project software defect prediction, the original datasets collected and labeled by humans are often corrupted by noise, and there are large distribution differences between the data of the source project and the target project. To address these problems, this paper proposes a two-stage cross-project defect prediction method called CLNI-KMM. In the instance filtering stage, noisy instances are filtered out by the CLNI method. In the instance transfer stage, the KMM instance transfer algorithm is used to adjust the training weights of the instances in the source project. On this basis, a software defect prediction model is built by combining the training data with a small proportion of labeled instances from the target project. Experimental results show that, compared with classical cross-project software defect prediction methods such as TCA, TNB and NNFilter, the proposed method achieves better prediction performance and stronger stability.
  • WANG Lisheng, YI Peng, GU Yunjie, JIANG Yiming
    Computer Engineering. 2020, 46(8): 203-209. https://doi.org/10.19678/j.issn.1000-3428.0055793
    To address the flow table distribution congestion caused by the limited performance of data plane Ternary Content Addressable Memory(TCAM) in Software-Defined Network(SDN), this paper proposes a Multi-Protocol Label Switching(MPLS) algorithm based on flow redirection. The algorithm uses flow redirection at the source node switch and MPLS label decentralization at the path switches, occupying part of the data link bandwidth to alleviate the low flow table update rate of TCAM and to increase system capacity. Experimental results show that, compared with the traditional OSPF algorithm, the proposed algorithm reduces link load by up to 60% and the insertion delay of flow table entries by nearly 90%, while increasing system capacity by up to 200%.
  • WANG Xinyi, WANG Yaobin, LI Ling, YANG Yang, BU Deqing, LIU Zhiqin
    Computer Engineering. 2020, 46(8): 210-215,222. https://doi.org/10.19678/j.issn.1000-3428.0055295
    Effective application of Thread-Level Speculation(TLS) technology can improve the hardware resource utilization of multicore chips, and it has achieved good results in the automatic parallelization of many serial applications. However, there is a lack of efficient analysis of subroutine-level thread speculation for HPEC applications. To address this problem, this paper designs an analysis mechanism for subroutine-level speculation and its core data structure. Seven representative HPEC programs are selected and their maximum potential subroutine-level parallelism is explored. On this basis, the speedup of the programs is analyzed in combination with thread granularity, parallel coverage, the number of subroutine calls, data dependency and source code. Analysis results show that the speedups of the fdfir, svd, db and ga programs range from 2.23 to 11.31, while the tdfir program benefits most, with a speedup of 221.78. Applications that include many subroutine calls and light data dependency are more suitable for parallelization with subroutine-level TLS technology.
  • NIE Fei, LI Jian
    Computer Engineering. 2020, 46(8): 216-222. https://doi.org/10.19678/j.issn.1000-3428.0055247
    To cope with the distributed display, race condition and failure detection issues of the Cockpit Display System(CDS), this paper analyzes the ARINC661 specification and on this basis proposes a distributed avionic display and control system scheme that integrates a network deduction algorithm and a failure detection algorithm. The scheme implements data synchronization between display and control units by designing a resource pool for graphics widgets. Then the data-recording race condition algorithm and the failure detection algorithm for display and control management are used to ensure system reliability. Experimental results show that the proposed scheme effectively solves the distributed display and reliability problems of CDS and meets the failure detection requirements of avionic distributed display and control in scenarios of different severity levels.
  • HE Wangyu, WANG Zhonghua, LI Yahui
    Computer Engineering. 2020, 46(8): 223-227,234. https://doi.org/10.19678/j.issn.1000-3428.0056142
    To realize the secure start-up of multiple cooperative embedded computers, a trusted computing module normally has to be embedded in every computer, which brings large energy consumption and management overhead to resource-constrained embedded systems. To address this problem, this paper proposes a distributed trusted measurement method combining the Trusted Cryptography Module(TCM) and the Virtual Trusted Cryptography Module(VTCM). The embedded computer with a TCM module installed is used as the trusted base, and the VTCM, together with the TCM module, verifies the configuration information of the other computers to complete the distributed trusted measurement, so as to realize trust extension within the work domain. Experimental results show that the method meets the confidentiality and integrity requirements of the trusted start-up process of embedded computers, and that secure parallel start-up in an embedded environment is feasible.
  • ZHANG Hao, WEI Jinghe
    Computer Engineering. 2020, 46(8): 228-234. https://doi.org/10.19678/j.issn.1000-3428.0055785
    To realize protocol conversion and efficient communication between different IP cores in a System on Chip(SoC), an efficient PLB2AXI bus bridge design scheme is proposed. By taking advantage of the bandwidth of the PLB and AXI buses and introducing pipelined transmission and overlapped read-write transmission, the address, data and control signals of the PLB bus protocol are converted into the corresponding signals of the AXI bus protocol, so as to implement communication between the two bus protocols. The functions of the PLB2AXI bus bridge are verified at the module level and the FPGA system level. The results show that the master bridge of this scheme converts the protocol correctly, and its time consumption is only 54.41% of that of the traditional master bridge, giving it higher conversion and transmission efficiency.
  • Graphics and Image Processing
  • LI Peiyuan, HUANG Chi
    Computer Engineering. 2020, 46(8): 235-242. https://doi.org/10.19678/j.issn.1000-3428.0055607
    The purpose of human protein image classification is to identify localization labels such as the nucleoplasm and nuclear membrane of protein organelles. To address the large scale of protein classification datasets, the imbalance of multi-label categories and the small differences between classes, this paper combines CSPPNet and ensemble learning to propose a classification method for human protein images. The method constructs a CSPPNet model that combines coarse-grained and fine-grained identification: the feature maps generated by the first few layers of the model are fed into a spatial pyramid pooling layer and combined with the feature maps generated by the later convolutions, so that global and local features are used together to automatically detect differences between images and improve the precision of fine-grained image classification; ensemble learning is then used to further improve accuracy. Experimental results show that the accuracy and F1 value of the model are improved compared with classic Convolutional Neural Network(CNN) models.
  • FENG Yufang, YIN Hong, LU Houqing, CHENG Kai, CAO Lin, LIU Man
    Computer Engineering. 2020, 46(8): 243-249,257. https://doi.org/10.19678/j.issn.1000-3428.0055034
    Image fusion methods based on deep learning tend to lose the shallow feature information of the network and have difficulty recognizing images accurately. For this reason, this paper proposes an infrared and visible light image fusion method that uses an improved Fully Convolutional Network(FCN). The Non-Subsampled Shearlet Transform(NSST) is used to decompose the source images in a multi-scale, multi-directional way to generate high-frequency and low-frequency sub-band images. The high-frequency sub-bands are then input into the FCN model to extract multi-scale features and generate high-frequency sub-band feature maps, and a maximum weighted average algorithm is used to fuse the high-frequency sub-bands. At the same time, a local energy fusion strategy is used to fuse the low-frequency sub-bands, and the final fused image is obtained by applying the inverse NSST to the fused high-frequency and low-frequency sub-bands. Experimental results show that, compared with GFF, WLS, IFE and other methods, the proposed fusion method provides better visual effects of fused images and better index evaluation results.
  • HUANG Kun, QIAN Junhao, WANG Jiangwen
    Computer Engineering. 2020, 46(8): 250-257. https://doi.org/10.19678/j.issn.1000-3428.0055349
    To improve the real-time performance and robustness of image matching algorithms, this paper proposes a feature point matching algorithm based on an improved FREAK descriptor. The algorithm simplifies the 8-layer retina model of the classical FREAK algorithm to a 5-layer one and uses a greedy search to select 64 receptive field pairs, so as to reduce the computational overhead while keeping as much useful point-pair information as possible. On this basis, a rotation-invariant Local Binary Pattern(LBP) coding is designed for every receptive field to increase the discriminative power of the descriptor. Experimental results show that, compared with FREAK, BRISK and other algorithms, the proposed algorithm has the smallest descriptor size and, in most scenes, higher calculation speed and accuracy, making it more suitable for environments with complex illumination changes.
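A small sketch of rotation-invariant LBP coding of the kind referred to above: threshold the neighbours against the centre value and take the minimum over all circular bit rotations. The neighbourhood values and sampling layout are illustrative, not the paper's receptive-field configuration.

```python
def rotation_invariant_lbp(center, neighbors):
    """Encode a sampling point by thresholding its P neighbours against the
    centre value, then take the minimum over all circular bit rotations so
    the code is invariant to in-plane rotation."""
    P = len(neighbors)
    bits = [1 if n >= center else 0 for n in neighbors]
    code = sum(b << i for i, b in enumerate(bits))
    best = code
    for r in range(1, P):
        rotated = ((code >> r) | (code << (P - r))) & ((1 << P) - 1)
        best = min(best, rotated)
    return best

# Two neighbourhoods that differ only by a rotation map to the same code.
print(rotation_invariant_lbp(10, [12, 9, 8, 15, 11, 7, 6, 13]))
print(rotation_invariant_lbp(10, [15, 11, 7, 6, 13, 12, 9, 8]))
```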
  • XIE Rui, SHAO Kun, HUO Xing, MITHUN Md Masud Parvej
    Computer Engineering. 2020, 46(8): 258-263,270. https://doi.org/10.19678/j.issn.1000-3428.0055000
    Image quality assessment models evaluate image quality by extracting and analyzing image features consistent with the human visual system. In recent years, with the development of deep learning, many deep learning-based image quality assessment models have emerged, but most of them are prone to over-fitting when the amount of data is small. To address this problem, this paper establishes a Res-DIQaM_FR/NR image quality assessment model by improving the DIQaM_FR/NR model. The improved model uses transfer learning to replace the original feature extraction layers of DIQaM_FR/NR with a pre-trained ResNet50 network for image feature extraction, and a global average pooling layer replaces the fully connected layer FC-512 of DIQaM_FR/NR for regression learning on the extracted features. Experimental results show that the proposed model reduces the complexity of DIQaM_FR/NR while deepening its network structure, and can simulate the human visual system well on small amounts of data to accurately assess image quality.
  • ZHANG Haitao, ZHANG Meng
    Computer Engineering. 2020, 46(8): 264-270. https://doi.org/10.19678/j.issn.1000-3428.0054946
    To improve the accuracy and robustness of the original Single Shot MultiBox Detector(SSD) algorithm for small target detection, this paper proposes an SSD target detection algorithm based on the channel attention mechanism. The algorithm applies global pooling to the high-level feature maps of the original SSD, enhances their semantic information with the channel attention mechanism, and introduces a dilated convolution structure for subsampling of the low-level feature maps to enlarge their receptive field and preserve detail and location information. Finally, feature maps at different levels are fused by cascading to effectively recognize small and occluded objects. Experimental results on the PASCAL VOC dataset show that, compared with the original SSD algorithm, the proposed algorithm improves the mean Average Precision(mAP) by 2.2%, with higher accuracy and robustness for small object detection.
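A minimal PyTorch sketch of a squeeze-and-excitation style channel attention block of the kind described above: global average pooling squeezes each feature map to one value, a small bottleneck MLP produces per-channel weights, and the maps are rescaled. Channel count, reduction ratio and layer names are illustrative, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Global average pooling -> bottleneck MLP -> sigmoid weights -> rescale."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                      # reweight the channels of the feature map

feat = torch.randn(2, 512, 19, 19)        # e.g. one SSD convolutional feature map
print(ChannelAttention(512)(feat).shape)  # torch.Size([2, 512, 19, 19])
```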
  • Development Research and Engineering Application
  • FANG Rui, YU Junyang, DONG Lifeng
    Computer Engineering. 2020, 46(8): 271-276. https://doi.org/10.19678/j.issn.1000-3428.0055414
    Online social platforms contain a large amount of junk text, which, when widely spread, hinders normal social interaction. To address this problem, this paper proposes a junk text filtering model. The model uses the BERT model to extract sentence encodings of the text, constructs features from the sentence encodings with the B-Feature method, and further builds the obtained features into a feature matrix based on the relationships between the features and the texts. The feature matrix is processed by a BP neural network classifier, and junk texts are detected and filtered. Experimental results show that the accuracy of the proposed model on long, medium and short text datasets is 7.8%, 3.8% and 11.7% higher respectively than that of the TFIDF-BP model, and its accuracy on medium and short text datasets is 2.1% and 13.7% higher respectively than that of the naive Bayes model, so the model can effectively classify and filter junk texts.
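A sketch of the general pipeline named above: BERT sentence encodings feeding a small feed-forward (BP) classifier. The pretrained checkpoint, mean pooling, toy texts and labels are assumptions for illustration; the paper's B-Feature construction and feature matrix are not reproduced.

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

# Sentence encoding with a pretrained BERT (mean pooling over token vectors).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def encode(texts):
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state          # (batch, seq, 768)
    mask = enc["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)         # masked mean pooling

# A BP (feed-forward) classifier over the sentence encodings.
classifier = nn.Sequential(nn.Linear(768, 64), nn.ReLU(), nn.Linear(64, 2))

texts = ["Win a free prize now, click this link!!!", "See you at the meeting tomorrow."]
labels = torch.tensor([1, 0])                           # 1 = junk, 0 = normal (toy labels)
logits = classifier(encode(texts))
loss = nn.CrossEntropyLoss()(logits, labels)
loss.backward()                                         # one illustrative training step
print(logits.shape, round(loss.item(), 3))
```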
  • WANG Haoliang, LIAN Yuzhong, WANG Lili
    Computer Engineering. 2020, 46(8): 277-283. https://doi.org/10.19678/j.issn.1000-3428.0057715
    The data islands generated in traditional smart city system construction hinder the development of the electronic license database. Therefore, this paper analyzes the traditional electronic license database and uses blockchain technology to design a decentralized electronic license sharing and transaction system. Ethereum-based smart contracts are used for on-chain and off-chain data storage and for on-chain license transaction management, so as to ensure the traceability, tamper resistance, distributed storage and transaction security of electronic licenses. A prototype system is built on the Ethereum environment and tested. Results show that the proposed tamper-proof and traceable electronic license scheme is feasible: it meets the needs of all participants, and each participant's information is protected by a private key used for decryption, which prevents information leakage.
  • ZHAO Xueyuan, ZHOU Shaolei, WANG Shuailei, YAN Shi
    Computer Engineering. 2020, 46(8): 284-289. https://doi.org/10.19678/j.issn.1000-3428.0055224
    To address the formation containment control of multiple Unmanned Aerial Vehicle(multi-UAV) systems with multiple leaders, this paper designs a consensus-based distributed controller. The formation containment control problem is transformed into a consensus problem by variable substitution. By using the special properties of the Laplacian matrix and the relationship between leaders and followers, the consensus problem is simplified to a stability problem of low-order systems. The stability problem is analyzed with a Lyapunov function, on this basis a controller design method is proposed, and the solution of the feedback matrix is given. The motion of the multi-UAV system in space is simulated, and the results show that the leaders form the desired formation and the followers move within the convex hull formed by the leaders, which means that formation containment control is achieved.
  • XU Chundong, ZHOU Jing, YING Dongwen, LONG Qinghua
    Computer Engineering. 2020, 46(8): 290-296,304. https://doi.org/10.19678/j.issn.1000-3428.0055189
    To effectively segment Heart Sound Signals(HSS), this paper proposes an adaptive heart sound segmentation method based on Non-Stationary System Identification(NSSI), which extracts the envelope of the HSS and then smooths and broadens it. The method uses the noise-reduction Signal-to-Noise Ratio(SNR) of the large-scale wavelet and the mean of the feature envelope to establish an adaptive threshold function for heart sound segmentation, and excludes false segmentation points caused by noise and murmurs according to the envelope and time-domain features. Experimental results show that the proposed method can effectively extract the basic features of HSS, increasing the segmentation accuracy to 89.21%. Compared with the Viola integral envelope segmentation method, the improved double-threshold envelope segmentation method based on the Hilbert-Huang transform and other methods, the proposed method has higher segmentation accuracy and better real-time performance.
  • HE Yangyu, YAN Lei, YI Mianzhu, LI Hongxin
    Computer Engineering. 2020, 46(8): 297-304. https://doi.org/10.19678/j.issn.1000-3428.0055363
    To address the inaccurate formulation and incomplete coverage of existing methods for Laotian Named Entity Recognition(NER) in the military field, this paper proposes a method combining Conditional Random Field(CRF) and rules. By analyzing the characteristics of Laotian and of domain texts, the method selects atomic features such as the word, part of speech, general name, boundary word and dictionary to construct a combined feature template. The CRF model is trained on a self-built tagged corpus and evaluated on a test corpus. Based on the identified error cases, rules expressing linguistic certainty are added for post-processing to improve recognition performance. Experimental results show that the final overall precision, recall and F-measure of the method reach 91.49%, 90.96% and 91.22% respectively, effectively improving Laotian NER in the military field.
  • DU Xuewu, ZHANG Mingxin, SHA Guangtao, WU Qiuyu
    Computer Engineering. 2020, 46(8): 305-312. https://doi.org/10.19678/j.issn.1000-3428.0055538
    Fuzzy control rules are the core of a fuzzy PID controller, and optimizing them usually requires specific methods to weaken the correlation between the rules, which affects calculation efficiency and control accuracy. Therefore, this paper proposes an Improved Bat Algorithm(IBA) to optimize fuzzy control rules on the basis of the Bat Algorithm(BA) optimization method. A neighborhood search operator is designed from the correlation between fuzzy control rules to improve the search accuracy of BA, and a chaos mutation operator is introduced to prevent BA from falling into local optima. The fuzzy PID control system is evaluated with the ITAE value as the performance index. Simulation results show that, compared with the particle swarm algorithm, the genetic algorithm and the improved ant colony algorithm, IBA reduces the adjustment time and overshoot of the fuzzy PID controller with the optimized fuzzy control rules, and improves control accuracy and calculation efficiency.
  • YANG Ben, WANG Weiye, ZHAO Wanting, XIE Jinkui
    Computer Engineering. 2020, 46(8): 313-320. https://doi.org/10.19678/j.issn.1000-3428.0055667
    Automatic stowage planning is an important part of automated terminal operation. As an NP-complete problem, it requires consideration of multiple factors and complex constraints. Traditional stowage planning algorithms focus on planning results and ignore the impact of container area scheduling on task efficiency. To improve the utilization of yard equipment and the rationality of stowage planning results, this paper proposes a stowage planning strategy using dynamic depth-first multi-branch search based on the stowage planning tasks arranged by the Crane Work Plan(CWP). In the offline learning phase, the state value function of the container area is obtained by learning from historical data. In the online planning phase, taking the value function and various constraints into account, the optimal container selection decision is obtained by dynamic depth-first multi-branch search. A simulation experiment is carried out using real ship data from the fourth phase of the automated container terminal project of Shanghai Ocean Port. Experimental results show that, compared with the traditional greedy strategy, the proposed algorithm reduces the container turnover rate and the double-trolley sharing rate by 2% to 5%, while the utilization rate of yard equipment stabilizes at 90% to 96%.