
15 February 2020, Volume 46 Issue 2
    

  • ZHANG Enhao, CHEN Xiaohong, LIU Hong, ZHU Yulian
    Computer Engineering. 2020, 46(2): 1-10. https://doi.org/10.19678/j.issn.1000-3428.0053147
    With the development of data acquisition technology and the diversification of data obtaining approaches,the obtained data often have multiple views,thus forming multi-view data.Studying the information contained in these data becomes a research objective of multi-view learning.In order to make better use of multi-view data and improve the practical application of dimension reduction algorithms,this paper conducts research on multi-view dimension reduction algorithms.This paper first reviews multi-view data and multi-view learning,and then,on the basis of Canonical Correlation Analysis(CCA),MCCA and KCCA are reviewed as well.Moreover,the evolution of multi-view dimension reduction algorithms,from two-view data to multi-view data and from linear to nonlinear,is introduced herein.Then,this paper further summarizes various multi-view dimension reduction algorithms integrating discriminant information and nearest neighbor information,so as to provide a better understanding of these algorithms.Finally,this paper analyzes the characteristics and drawbacks of the multi-view dimension reduction algorithms and proposes future research directions.
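    As a quick illustration of the two-view starting point of this survey, the sketch below fits a standard CCA on synthetic two-view data with scikit-learn; the data shapes, variable names and number of components are illustrative assumptions, not taken from the paper.

```python
# Illustrative two-view CCA on synthetic data (not the paper's code).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))               # shared signal behind both views
view1 = latent @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(200, 10))
view2 = latent @ rng.normal(size=(2, 15)) + 0.1 * rng.normal(size=(200, 15))

cca = CCA(n_components=2)                        # project both views to 2 correlated dimensions
z1, z2 = cca.fit_transform(view1, view2)

# Correlation of the paired projections measures how well the two views align.
for k in range(2):
    print(np.corrcoef(z1[:, k], z2[:, k])[0, 1])
```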
  • CHEN Liangchen, GAO Shu, LIU Baoxu, TAO Mingfeng
    Computer Engineering. 2020, 46(2): 11-20. https://doi.org/10.19678/j.issn.1000-3428.0056532
    To implement anomaly detection for a high-dimensional network with massive traffic data,data dimensionality should be reduced to relieve transmission and storage burdens on the system.This paper introduces the network traffic anomaly detection process and dimensionality reduction methods in high-speed network environments.Then it summarizes commonly used traffic features in network traffic anomaly detection and the latest research developments of dimensionality reduction for traffic data.Aiming at two kinds of feature dimensionality reduction methods,network traffic feature selection and network traffic feature extraction,this paper lists and classifies frequently used algorithms and describes their principles,advantages and disadvantages respectively.On this basis,this paper analyzes existing datasets and evaluation indexes used in research on dimensionality reduction.Finally,this paper discusses development directions and challenges of dimensionality reduction technologies in network traffic anomaly detection.
  • SUN Zhiyong, JI Xinsheng, YOU Wei, LI Yingle
    Computer Engineering. 2020, 46(2): 21-27,34. https://doi.org/10.19678/j.issn.1000-3428.0054337
    In the 5G core network virtualization environment,virtual machines sharing the same physical server bring a series of problems,such as Side-Channel Attack(SCA),Virtual Node Escape Attack(VNEA) and so on,causing disclosure of users' private information.The existing defense method based on dynamic migration of virtual machines is an effective active defense technology,but the frequent migration of virtual machines leads to some problems,such as high resource cost and low migration security.Therefore,this paper proposes a virtual machine migration method based on redundant transition.With this method,an evaluation and calculation model is established for the migration frequency of different virtual machines.On the premise of ensuring the privacy information security of virtual machines,the migration frequency is reduced.The redundant transition method is applied to part of the virtual machines to cope with the security risks brought by the frequent migration of virtual machines.Experimental results show that compared with the existing virtual machine dynamic migration method,the proposed method can reduce average migration convergence time and migration cost while maintaining the same security protection effect.
  • HAN Lei, YU Zhiyong, ZHU Weiping, YU Zhiwen
    Computer Engineering. 2020, 46(2): 28-34. https://doi.org/10.19678/j.issn.1000-3428.0053543
    The inference of vehicle destination can be inaccurate in the case that only the vehicle’s starting position is available.To address this problem,this paper proposes an approach that spatiotemporally searches the video data of city road cameras to obtain more information about the route of the passing vehicle,so as to predict its destination more accurately.In order to maximize the accuracy of target vehicle destination inference under the same spatiotemporal search times,three types of indexes are designed:the probability-based single index,the probability and Gini index-based composite index,and the probability and information gain-based composite index,which are used to evaluate the utility of different spatiotemporal searches for vehicle destination.Further,the CFMM-MidQuery algorithm,the CFMM-UtilityQuery-Gini algorithm and the CFMM-UtilityQuery-Info algorithm are proposed based on the three indexes respectively.Experimental results show that spatiotemporal search can improve the accuracy of vehicle destination inference.The effect of the benefit-based composite indexes is better than that of the probability-based single index.The difference in inference accuracy is as high as 11.4% under the same spatiotemporal search times.
  • HENG Xingchen, DONG Can, LIN Kequan, XIAO Yuting
    Computer Engineering. 2020, 46(2): 35-40,47. https://doi.org/10.19678/j.issn.1000-3428.0054838
    To embrace the marketization reform of the Chinese power system,this paper proposes a blockchain-based algorithm for distributed power auction transactions,so as to realize a power bidding system that supports complex forms of transactions.The algorithm divides relevant transactions into two types:offering and responding.Multiple responders are allowed to compete for an offering,and a node server will sort the responders by price to decide the winner.The order and content of transactions are verified based on sequential aggregate signatures to ensure the authenticity of transactions.Also,order-preserving encryption is adopted to protect the content of a transaction,thus ensuring the confidentiality of private data.On this basis,all transaction data are stored by using blockchain technologies to ensure the transactions are tamper-resistant.Experimental results show that the proposed algorithm can improve the efficiency of transaction generation and verification,enabling quick and safe power bidding transactions.
  • ZHANG Chuting, CHANG Liang, WANG Wenkai, CHEN Hongliang, BIN Chenzhong
    Computer Engineering. 2020, 46(2): 41-47. https://doi.org/10.19678/j.issn.1000-3428.0053810
    Question answering over knowledge graph is complex in the filtering of candidate master entities of questions,and most existing models ignore the fine-grained correlation between questions and relationships.To address the problem,this paper proposes a fine-grained question answering model over knowledge graph based on BiLSTM-CRF.The model is divided into two parts: entity recognition and relationship prediction.In the entity recognition part,the model uses the BiLSTM-CRF algorithm to improve accuracy,and the N-Gram algorithm is combined with the Levenshtein Distance algorithm to simplify the filtering process of candidate master entities.In the relationship prediction part,attention mechanism and Convolutional Neural Network(CNN) are used to capture the correlation between questions and relationships at the semantic level and the word level.Experimental results on the FB2M and FB5M evaluation datasets in FreeBase show that the proposed model has higher accuracy of entity relationship pair prediction compared with existing question answering methods for a single relationship.
  • WANG Yingjie, XIE Bin, LI Ningbo
    Computer Engineering. 2020, 46(2): 48-52,58. https://doi.org/10.19678/j.issn.1000-3428.0055246
    Deep models for natural language processing rely on huge,high-quality and human-annotated datasets.In order to alleviate such dependency,this paper proposes a BERT-based natural language processing pre-trained model for Chinese technological text named ALICE.The model improves the Masked Language Model(MLM) task and combines it with entity-level masking to boost the base model's performance on downstream tasks and make the learned representations fit the traits of Chinese much better.Experimental results show that,compared with the BERT model,the ALICE model improves the classification accuracy of Chinese technological texts and the F1 value of named entity recognition by 1.2% and 0.8%,respectively.
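    A minimal sketch of the entity-level masking idea mentioned in the abstract, assuming a toy token list and hypothetical entity spans; it is not the ALICE training code.

```python
# Toy illustration of entity-level masking for an MLM objective (assumed, not ALICE's code).
import random

tokens = ["深度", "学习", "模型", "依赖", "大规模", "标注", "数据"]
entity_spans = [(0, 2), (4, 6)]        # hypothetical entity boundaries: [start, end) token indices

def entity_level_mask(tokens, spans, mask_prob=0.5, mask_token="[MASK]"):
    """Mask whole entity spans instead of independent tokens."""
    masked = list(tokens)
    labels = [None] * len(tokens)       # prediction targets only at masked positions
    for start, end in spans:
        if random.random() < mask_prob:
            for i in range(start, end):
                labels[i] = masked[i]
                masked[i] = mask_token
    return masked, labels

random.seed(1)
print(entity_level_mask(tokens, entity_spans))
```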
  • HUANG Hui, LIU Yongjian, XIE Qing
    Computer Engineering. 2020, 46(2): 53-58. https://doi.org/10.19678/j.issn.1000-3428.0053734
    In Community Question Answering(CQA) websites such as Stack Overflow and Quora,the growing number of users leads to a sharp increase in the number of new questions.Traditional expert discovery methods usually establish user documents based on historical answer records and extract user text features from them,making it difficult to find appropriate experts to answer questions in time.To address this problem,an expert discovery method based on a user-tag heterogeneous network is proposed.This method builds a user-tag network based on the historical answer records of users and the tags attached to the questions,so as to obtain the vector representation of users.On this basis,a fully connected neural network is used to extract user features and text features of questions,and their cosine similarity is compared to obtain the list of candidate experts.Experimental results on the real-world Stack Exchange dataset show that compared with the LDA,STM,RankingSVM and QR-DSSM methods,this method achieves higher values on evaluation indexes and can accurately find experts that can provide correct answers.
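    The final matching step described above can be illustrated in a few lines: rank candidate experts by the cosine similarity between a question's feature vector and each user's vector. The vectors below are made-up placeholders for the network outputs.

```python
# Sketch of the matching step: rank candidate experts by cosine similarity
# between a question embedding and user embeddings (illustrative vectors only).
import numpy as np

user_vecs = {                              # assumed outputs of the user-tag network
    "alice": np.array([0.9, 0.1, 0.3]),
    "bob":   np.array([0.2, 0.8, 0.5]),
    "carol": np.array([0.4, 0.4, 0.9]),
}
question_vec = np.array([0.3, 0.7, 0.6])   # assumed text feature of a new question

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranking = sorted(user_vecs, key=lambda u: cosine(user_vecs[u], question_vec), reverse=True)
print(ranking)                              # candidate expert list, best match first
```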
  • LIU Yujiang, FU Lijun, LIU Junming, Lü Pengfei
    Computer Engineering. 2020, 46(2): 59-64,71. https://doi.org/10.19678/j.issn.1000-3428.0053545
    In the information extraction process,nondeterministic anaphora can cause incomplete information extraction.By analyzing the discriminative information generated only by the anaphoric part,the referenced part,the surrounding information,the referenced surrounding information and the original content in the current context,the anaphora relations are judged and a multilayer attention mechanism model is constructed.The probability calculation based on the attention mechanism is performed on these five parts at different levels,and the final results are used to determine whether the anaphora relations hold or not.With the vectorization of the anaphoric part and the referenced part,the four probability calculations on two attention layers make every training result unique before judgment.Experimental results on the OntoNotes 5.0 dataset show that the F value of the proposed model is 70.1% when both overt anaphora and zero anaphora are present.When only zero anaphora are present,the F value is 60.7%,which is higher than that of the model proposed by YIN Qingyu et al.
  • SU Qing, ZHANG Jingfang, LI Xiaomei
    Computer Engineering. 2020, 46(2): 65-71. https://doi.org/10.19678/j.issn.1000-3428.0053427
    Aiming at the problem of data sparsity in traditional collaborative filtering algorithms,this paper proposes a recommendation algorithm named timeSVD++ LR,which introduces the time effect attribute on the basis of the SVD++ algorithm and the linear regression model.The SVD++ algorithm is used to map the user and item information,fused with implicit feedback information,into the latent semantic space,and the user-item interaction is modeled as the inner product of the space.The rating value is explained by describing the characteristics of users and items on various factors,and then the time effect is modeled to further improve the accuracy of the prediction results.At the same time,the eigenvector is constructed according to the prediction rating matrix.The original training data is used as the input of the linear regression model,and the final cost function is optimized by the gradient descent algorithm to generate the parameter vector that minimizes the value of the cost function.The eigenvector and parameter vector are brought into the prediction model to solve the prediction score.Experimental results on the MovieLens dataset show that,compared with the RSVD,SVD++ and timeSVD++ algorithms,the mean absolute error and root mean square error of the proposed algorithm are lower,and its recommendation accuracy is higher.
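    For readers unfamiliar with SVD++, the sketch below shows the standard SVD++ prediction rule that the abstract builds on, where implicit feedback is folded into the user factor before the inner product. The numbers are toy values; the paper's timeSVD++ LR additionally models time-dependent biases and a linear regression stage.

```python
# Sketch of the SVD++-style prediction: r_hat = mu + b_u + b_i + q_i . (p_u + |N(u)|^(-1/2) * sum(y_j)).
import numpy as np

mu, b_u, b_i = 3.5, 0.2, -0.1              # global mean, user bias, item bias (toy values)
p_u = np.array([0.3, -0.2])                # explicit user factor
q_i = np.array([0.5, 0.4])                 # item factor
y_j = [np.array([0.1, 0.0]),               # implicit-feedback factors of items rated by the user
       np.array([-0.2, 0.3])]

implicit = sum(y_j) / np.sqrt(len(y_j))    # |N(u)|^(-1/2) * sum of y_j
r_hat = mu + b_u + b_i + q_i @ (p_u + implicit)
print(round(float(r_hat), 3))
```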
  • XU Xiaoyuan, HUANG Li, LI Haibo
    Computer Engineering. 2020, 46(2): 72-79. https://doi.org/10.19678/j.issn.1000-3428.0053523
    In order to improve the detection and recognition performance of weak tie overlapping communities,this paper proposes a community detection method based on time interaction bias influence propagation model.The target function for the model segmentation of community detection graph is designed and the load balance of the processor is optimized by applying community structure,so as to improve the solution efficiency of the model.Based on the neighborhood edge density,the approximate active edge is redefined and an influence propagation model is established,which can confirm that the users have high interaction frequency and have strong recognition performance for weak tie users.On this basis,a time interaction bias community detection method based on overlapping community detection is proposed.Experimental results show that the proposed method has high recognition accuracy and efficiency when conducting detection on overlapping communities.
  • LI Nana, HU Jianjian, GU Junhua, ZHANG Yajuan
    Computer Engineering. 2020, 46(2): 80-87,102. https://doi.org/10.19678/j.issn.1000-3428.0053625
    Traditional Deep Belief Networks(DBN) are often limited to local optima due to the random initialization of weights.To address the problem,this paper introduces an Improved Harmony Search(IHS) algorithm into the traditional DBN to construct an IHS-based DBN model,called IHS-DBN.Firstly,to improve the convergence speed and local search ability of the Harmony Search(HS) algorithm,a globally adaptive method of adjusting the harmony tone is used.Secondly,the reconstruction error function of DBN is taken as the optimization objective function of the IHS algorithm.Then the solution vector is iteratively optimized to find a set of better initial weights for DBN training.The proposed model is validated on the MNIST dataset and applied to talent evaluation in colleges to verify its effectiveness.Results show that the accuracy of the IHS-DBN model is improved by 3.6%,7.3% and 16.4% respectively compared with the DBN,SVM and BP neural network evaluation models.
  • ZHOU Qijun, WANG Peng, WANG Wei
    Computer Engineering. 2020, 46(2): 88-95. https://doi.org/10.19678/j.issn.1000-3428.0054213
    Adjacent time series data is correlated to some extent,as it is ordered by collection time.When extracting data from a time series,users tend to read multiple successive data points rather than a single data point.Based on the data locality of time series,this paper proposes a time series index based on dynamic segmentation,called DSI.DSI sets difference and difference levels to dynamically segment time series data,and uses interval tree to quickly query segmented data blocks of unequal length.The query result set is optimized by using the hierarchical clustering algorithm.Experimental results show that DSI has higher query efficiency than existing time series query indexes.
  • LIU Zhizhong, ZHANG Zhenxing, HAI Yan, GUO Sihui, LIU Yongli
    Computer Engineering. 2020, 46(2): 96-102. https://doi.org/10.19678/j.issn.1000-3428.0053379
    In the intelligent computing field,the rapid growth of available Internet services makes users increasingly dependent on services to complete various businesses,but the passive "request-response" service model seriously decreases user experience and resource utilization.To intelligently perceive user requirements and proactively recommend appropriate services to users,this paper proposes a method of active service recommendation based on user requirement prediction.The method firstly extracts user features and service features from massive data of historical services by using matrix factorization.On this basis,the extracted data is used to train the deep learning model and predict service demands of users,so as to recommend appropriate services to users.Experimental results on real data show that the proposed method has higher accuracy and stability of service recommendation than simply a matrix factorization model or deep neural network model.
  • JIA Xiaofang, SANG Guoming, QI Wenkai
    Computer Engineering. 2020, 46(2): 103-109. https://doi.org/10.19678/j.issn.1000-3428.0054147
    The collaborative filtering algorithm plays an important role in recommendation systems,but its execution efficiency and ranking accuracy are both low.The Alternating Least Squares(ALS) algorithm can implement parallel computing,thus improving the execution efficiency,but the time between data loading and iterative convergence of the algorithm is relatively long.Therefore,by combining the Nonlinear Conjugate Gradient(NCG) algorithm and the ALS algorithm,this paper proposes an ALS-NCG algorithm to accelerate the ALS algorithm.The performance of the ALS-NCG algorithm is evaluated in the Spark distributed data processing environment.Experimental results show that compared with the ALS algorithm,the ALS-NCG algorithm requires fewer iterations and less time to obtain high-precision recommendation rankings.
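    As a reference for the ALS half of the abstract, the sketch below performs one regularized least-squares update of the user factors with the item factors held fixed. It is a minimal dense, single-machine illustration, not the Spark implementation evaluated in the paper.

```python
# One alternating-least-squares half-step: update user factors U with item factors V fixed.
import numpy as np

R = np.array([[5.0, 3.0, 0.0],              # toy rating matrix, 0 = unobserved
              [4.0, 0.0, 1.0]])
k, lam = 2, 0.1                             # latent dimension and regularization weight
rng = np.random.default_rng(0)
V = rng.normal(size=(R.shape[1], k))        # item factors (held fixed in this half-step)

U = np.zeros((R.shape[0], k))
for u in range(R.shape[0]):
    observed = R[u] > 0
    Vo, ro = V[observed], R[u, observed]
    # Solve (Vo^T Vo + lam*I) u = Vo^T r, the regularized least-squares step of ALS.
    U[u] = np.linalg.solve(Vo.T @ Vo + lam * np.eye(k), Vo.T @ ro)

print(U @ V.T)                              # reconstructed ratings after this half-step
```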
  • FAN Yuqi, ZHANG Bei, WANG Lunfei
    Computer Engineering. 2020, 46(2): 110-117. https://doi.org/10.19678/j.issn.1000-3428.0054110
    When massive data is placed in data centers,each data item often has multiple copies,which costs service providers a huge amount in electricity fees to run the servers storing these data copies.Meanwhile,in order to ensure their consistency,the copies placed in different data centers need to be synchronized through the network between data centers,which results in high network transmission fees.Therefore,aiming at minimizing the cost of multiple data copy placement,this paper establishes a data placement model and proposes the data placement algorithm DDDP based on data grouping and data center division.The data is divided into multiple groups,the data centers are divided into subsets according to the requirements of access delay,and the data in each data group is placed into the subset of data centers that can meet the requirements of access delay and minimize the cost of placement.Simulation results show that compared with the NPR algorithm,the DDDP algorithm can effectively reduce the placement cost of data storage in data centers.
  • YANG Fengfan, CHANG Jinfan, WANG Zheng
    Computer Engineering. 2020, 46(2): 118-125,133. https://doi.org/10.19678/j.issn.1000-3428.0054138
    KM2A is one of the main detector arrays of the Large High Altitude Air Shower Observatory(LHAASO).Nearly 7 000 detectors are evenly distributed in the experimental area of 1.3 km².To address problems in time synchronization and data transmission for readout electronics in such a large-scale distributed high energy physics experiment,this paper proposes a data transmission method with high-precision time synchronization.This method uses the TCP/IP protocol stack and White Rabbit technology to integrate the clock network with the data network.A slimmed-down TCP/IP protocol stack is used for communication with the PC,and can realize efficient and reliable data transmission and high-precision clock synchronization without any extra hardware added.Test results show that this method improves the time synchronization precision of the LHAASO KM2A readout electronics module to less than 1 ns,and ensures the reliability of data transmission.
  • ZHU Mingqiang, FU Xiaodong, LIU Li, FENG Yong, LIU Lijun
    Computer Engineering. 2020, 46(2): 126-133. https://doi.org/10.19678/j.issn.1000-3428.0053354
    Different users have different evaluation criteria and preferences for the same online service,making their ratings for services incomparable,so users cannot easily select suitable online services.To address the problem,this paper proposes an online service evaluation method based on the Slater Social choice theory.The method fills the sparse rating matrix.It compares user ratings for services to construct a directed graph with services as nodes and preference relations as directed edges.Then it judges points-to relations of all nodes in the graph based on the points-to relations between the similar set,the front set and the later set,as well as points-to relations of directed edges of internal nodes.Thus the order of all nodes can be obtained to generate a service rating result.Experimental results show that compared with Sum,Average and Copeland methods,the proposed method can better avoid a few users manipulating the ratings.The proposed method also conforms to the Slater criteria and can reflect the preference needs of most users.
  • LI Yang, CHEN Zibin, XIE Guangqiang
    Computer Engineering. 2020, 46(2): 134-140. https://doi.org/10.19678/j.issn.1000-3428.0053824
    To improve the prediction accuracy and reduce the prediction error under the same level of privacy protection,this paper proposes a differential privacy protection algorithm DiffPETs based on ExtraTrees.During the decision tree generation process,the result value of each feature is calculated according to different criteria,the feature with the highest score is selected by the exponential mechanism and noise is added to the leaf nodes through the Laplace mechanism,enabling the algorithm to provide ε-differential privacy protection.Then,this paper applies the DiffPETs algorithm to the classification and regression analysis of decision trees.For the classification tree,the Gini index is selected as the utility function of the exponential mechanism and the sensitivity of the Gini index is given.For the regression tree,the variance is taken as the utility function of the exponential mechanism and the sensitivity of the variance is given.Experimental results show that compared with existing differential privacy decision tree classification and regression algorithms,the DiffPETs algorithm can effectively reduce the prediction error.
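    The two differential-privacy primitives the abstract relies on, the Laplace mechanism for leaf-node noise and the exponential mechanism for feature selection, can be sketched generically as follows; the scores, sensitivities and epsilon values are illustrative, and this is not the DiffPETs implementation.

```python
# Minimal sketches of the Laplace and exponential mechanisms (generic, not DiffPETs).
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Add Laplace(0, sensitivity/epsilon) noise, e.g. to a leaf-node count."""
    return true_value + rng.laplace(scale=sensitivity / epsilon)

def exponential_mechanism(scores, sensitivity, epsilon):
    """Pick a candidate split feature with probability ~ exp(eps * score / (2 * sensitivity))."""
    scores = np.asarray(scores, dtype=float)
    weights = np.exp(epsilon * scores / (2 * sensitivity))
    probs = weights / weights.sum()
    return rng.choice(len(scores), p=probs)

print(laplace_mechanism(42.0, sensitivity=1.0, epsilon=0.5))
print(exponential_mechanism([0.30, 0.45, 0.10], sensitivity=0.5, epsilon=1.0))
```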
  • QIAN Hui, LI Guangqiu, WANG Lingbo, CAI Jianhui
    Computer Engineering. 2020, 46(2): 141-147,153. https://doi.org/10.19678/j.issn.1000-3428.0053779
    Aiming at the Transmit Antenna Selection with Delay(TASD)/Orthogonal Space-Time Block Code(OSTBC) wireless communication system,this paper proposes a physical layer security enhancement scheme based on the Minimum Mean Square Error(MMSE) channel predictor.The MMSE channel prediction scheme is applied to the TASD/OSTBC wireless communication system to form a Transmit Antenna Selection with Prediction(TASP)/OSTBC wireless communication system.The analytical expressions of the secrecy outage probability,non-zero secrecy capacity probability and asymptotic secrecy outage probability over the Rayleigh block fading channel are derived.On this basis,the effects of parameters such as the number of transmit and receive antennas on the main channel,the number of eavesdropper antennas,and the normalized delay of the channel on the physical layer security performance of the system are analyzed.Numerical calculations and simulation results show that using TASP can improve the physical layer security performance of the OSTBC wireless communication system.
  • CHEN Fatang, CHEN Jiatian, LI Xiu
    Computer Engineering. 2020, 46(2): 148-153. https://doi.org/10.19678/j.issn.1000-3428.0054081
    This paper proposes a new mapping scheme of secure transmission for Generalized Spatial Modulation(GSM) systems to improve GSM security in a time division duplex wireless communication system.The state information of legitimate channels is introduced into the mapping process.Then the active antenna combination index and constellation symbol index mapped by the space bit and the constellation bit are reselected to enhance system security.Simulation results show that the passive eavesdropper in this scheme is unable to recover the information carried by the active antenna combination index and the constellation symbol index.The initial value of the secrecy rate is increased by 184.31% compared with the scheme based on spatial modulation,demonstrating that the proposed scheme can enhance system security.
  • SHEN Guoliang, ZHAI Jiangtao, DAI Yuewei
    Computer Engineering. 2020, 46(2): 154-158,169. https://doi.org/10.19678/j.issn.1000-3428.0053783
    The network covert channel is a communication channel that establishes secret message transmission between different hosts on the network by utilizing reserved,optional or undefined fields in the network protocols.HTTP protocol,as one of the most commonly used protocols on the World Wide Web,becomes a good carrier of network covert channels.In order to effectively detect the HTTP protocol-based covert channel,this paper proposes a covert channel detection method based on Markov model.Taking Host,Connection,Accept and User-Agent as keywords,this method establishes the Markov model of data packet and calculates the state transition probability matrix of this model.The relative entropy between the data packet to be tested and the normal data packet is used to determine whether the covert channel exists or not.Experimental results show that when the abnormal data in the covert channel exceeds 70%,the detection rate of this method can reach more than 97%.
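    The detection idea can be sketched as follows: estimate a header-field transition matrix from observed packets and compare it against the normal profile with relative entropy. The state set, smoothing and scoring below are simplifying assumptions, not the paper's exact construction.

```python
# Sketch of Markov-model covert channel detection: transition matrix + relative entropy.
import numpy as np

STATES = ["Host", "Connection", "Accept", "User-Agent"]
IDX = {s: i for i, s in enumerate(STATES)}

def transition_matrix(sequences):
    counts = np.ones((len(STATES), len(STATES)))          # add-one smoothing avoids zero probabilities
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[IDX[a], IDX[b]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def relative_entropy(p, q):
    return float(np.sum(p * np.log(p / q)))

normal = transition_matrix([["Host", "Connection", "Accept", "User-Agent"]] * 50)
tested = transition_matrix([["Host", "User-Agent", "Accept", "Connection"]] * 50)
score = sum(relative_entropy(normal[i], tested[i]) for i in range(len(STATES)))
print(score)   # a large divergence from the normal profile suggests a covert channel
```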
  • HUO Litian, SHAO Peinan, XU Liding, XU Jun
    Computer Engineering. 2020, 46(2): 159-169. https://doi.org/10.19678/j.issn.1000-3428.0056123
    The main goals of mimic defense are to enable the Mimic Common Operating Environment(MCOE) to implement active defense against known/unknown backdoors,block security threats and attacks in time,and ensure data integrity.To achieve these goals,this paper proposes the criteria of mimic resource scheduling.Based on the criteria,this paper analyzes the design and implementation of mimic resource management services and scheduling algorithms in terms of the interaction design of mimic resource management and the MCOE framework,mimic resource management,and mimic resource scheduling.This paper also constructs a heterogeneous feature classifier for software and hardware resources of mimic operating nodes,as well as an N-tuple and a heterogeneous executor N-tuple based on third-level heterogeneity.On this basis,this paper balances the resource scheduling loads,and maximizes the randomness,dynamicity and heterogeneity of the N heterogeneous executors,the resources on the running server node and the resource objects.The correctness and effectiveness of the mimic resource management and scheduling algorithm on a cloud container cluster are verified using mimic management service instances.
  • YANG Xiaodong, PEI Xizhen, AN Faying, LI Ting, WANG Caifen
    Computer Engineering. 2020, 46(2): 170-174,182. https://doi.org/10.19678/j.issn.1000-3428.0054961
    Vehicular Ad Hoc Network(VANET) can improve the security of intelligent transportation systems and traffic efficiency,but the open network communication environment makes the system vulnerable to attacks,causing various security issues.To address privacy disclosure and inefficient signature verification in VANET,this paper proposes a message authentication scheme for VANET.The scheme integrates an identity-based cryptosystem with aggregate signatures,so as to aggregate the authentication of multiple messages into a short signature.Thus vehicles can rapidly verify the validity of all signatures by checking only the aggregate signature.Analysis results show that under the random oracle model,the security of the proposed scheme can be reduced to the hardness of the computational Diffie-Hellman problem,and the scheme can effectively reduce the authentication response time of vehicles to communication messages.
  • LUO Yunpeng, ZHU Nitong, MAO Ciwei, CHENG Jinxue, XU Chungen
    Computer Engineering. 2020, 46(2): 175-182. https://doi.org/10.19678/j.issn.1000-3428.0054040
    With the rapid development of cloud storage technology,more and more individual users and companies store their private data in the cloud.However,most cloud platforms store data information in plaintext,resulting in problems such as privacy leakage,unauthorized access and so on.In order to improve the security of private data,this paper proposes a searchable encryption scheme.In this scheme,one-to-many file sharing is completed after conjunctive keyword search is achieved.An index of keywords is built to avoid memorizing the locations of keywords,and secure and private file sharing is achieved without introducing a trusted third party.The verification results under the random oracle model show that the security of the scheme is based on the q-Bilinear Diffie-Hellman(q-BDH) problem.The scheme is implemented in the Java programming language and the interaction between users and servers is simulated.Results show the feasibility of the proposed scheme,whose efficiency is better than that of the GSW-1,GSW-2 and FK schemes.
  • FANG Chengzhi, CHENG Youcheng, HUO Xinglong
    Computer Engineering. 2020, 46(2): 183-186. https://doi.org/10.19678/j.issn.1000-3428.0054246
    The Narrow Band Internet of Things(NB-IoT),as a new generation of cellular IoT technology defined by the 3rd Generation Partnership Project(3GPP) for low-power and wide-coverage services,is one of the key technologies for achieving the Internet of Everything.Channel estimation is the key to whether the NB-IoT terminal can accurately and effectively recover the transmitted signal.This paper studies the pilot-based downlink channel estimation algorithm in NB-IoT,and proposes a channel interpolation estimation algorithm based on Moving Least Squares(MLS).Pilots are inserted at the transmitting terminal,and the channel parameters at pilot points are calculated according to the signals at the receiving terminal.After that,the concept of compact support is introduced,and the channel parameters are estimated by weighting the pilot points within the nearby support sub-domains.Simulation results show that compared with the traditional linear interpolation and quadratic interpolation algorithms,the bit error rate of the proposed algorithm is lower while the computational complexity is not significantly increased.
  • HE Rongyi, WANG Xiaoqun, CHEN Kaifeng
    Computer Engineering. 2020, 46(2): 187-194. https://doi.org/10.19678/j.issn.1000-3428.0053857
    In the Ultra Wide Band(UWB) distributed media access control protocol,when the DRP reserved block does not send all data frames,the relay device cannot send the data frames received from the source device in the current DRP policy until the next DRP duration reserved for the target device begins,which obviously increases the end-to-end latency between the source device and the target device.Therefore,this paper proposes a new reservation-based routing protocol.The structure of the link feedback information element is designed and the information on the data rate and transmission power level of adjacent nodes is announced,so that all devices can obtain the data rate information about the used links of adjacent devices.The intermediate devices between the source device and the target device determine the optimal routing by calculating the routing cost.The target device selects the routing with the minimum link cost and determines the optimal routing between the source device and the target device according to the number of media access slots and the number of hops.Simulation results show that the proposed protocol can reduce end-to-end delay and energy consumption,and improve network throughput by minimizing packet loss and collisions.
  • WANG Weipeng, LIN Qiangqiang, TU Shanshan, XIAO Chuangbai
    Computer Engineering. 2020, 46(2): 195-200. https://doi.org/10.19678/j.issn.1000-3428.0053376
    A Tracking Area List(TAL) is formed by the flexible configuration of multiple Tracking Areas(TA).The introduction of TAL in 3GPP R8 can reduce the location management signaling overhead.Current TAL-based location management methods mostly generate different TALs for different users,so their computing efficiency decreases drastically in massive cellular deployment environments.To address this problem,based on TA planning,a TAL management method based on overlapping community detection is proposed.By counting the location updates and paging data generated by users in the tracking areas,TAL management is modeled as a graph partitioning problem,and a linear programming model is given.The overlapping community detection algorithm based on game theory is used to give the TAL structure.Experimental results show that this method can effectively reduce the location management signaling overhead in the cellular network and improve the efficiency of TAL allocation.
  • GUAN Liang, ZHENG Lin, ZHANG Wenhui
    Computer Engineering. 2020, 46(2): 201-206,213. https://doi.org/10.19678/j.issn.1000-3428.0054376
    When a filter bank is used for signal spectrum division and interpolation,user signal detection will be interfered if the spectrum bandwidth of user signals is greater than the sum of free spectrum bandwidth or the interpolation channel is highly interfered.To address the problem,this paper proposes a likelihood recovery algorithm for subspectrum deficiency.The spectrum division technology is used to construct an equivalent channel model in scenarios of subspectrum deficiency,and then the adaptive likelihood signal recovery algorithm is used to suppress interference from non-ideal channels.Simulation results show that the algorithm can effectively recover distorted signals and improve the signal detection performance in the absence of subspectrum in cognitive radio.
  • PAN Weiwei, KANG Kai, ZHANG Wuxiong, WANG Haifeng
    Computer Engineering. 2020, 46(2): 207-213. https://doi.org/10.19678/j.issn.1000-3428.0053993
    The current Wireless Fidelity(WiFi) positioning system has the problem of low cumulative accuracy within an error range of two meters.Therefore,this paper proposes an improved linear discriminant low-dimensional combination algorithm,in which outliers are removed according to the characteristics of WiFi signal strength data.The linear discriminant algorithm is used to arrange combinations under low-dimensional conditions,and the probability values obtained are summed up.A threshold is set as an additional constraint for the new data obtained in the online positioning phase,so as to improve the accuracy of adjacent grid positioning.Measured data of indoor and outdoor adjacent grids in a real office environment under different conditions are used as the test dataset,and the results verify the effectiveness and correctness of the proposed algorithm.
  • KONG Weiquan, LIU Guangzhong
    Computer Engineering. 2020, 46(2): 214-220,229. https://doi.org/10.19678/j.issn.1000-3428.0053850
    Time synchronization algorithms for a terrestrial environment cannot be directly used in an underwater environment,as time synchronization of underwater sensors is influenced by many factors,including movement of nodes,transmission delay and energy consumption.This paper comprehensively considers the characteristics of underwater communication and proposes a cluster-based time synchronization algorithm using dual cluster heads.The algorithm clusters nodes according to their energy consumption and depth,and selects two optimal nodes for each cluster as the primary and secondary cluster heads.Then a model for node movement is introduced to reduce the calculation error caused by node mobility,and the mobile beacon node is used to complete the synchronization of cluster nodes.On this basis,dual cluster heads are used for synchronization of common nodes,considering the influence of the dynamically changing sound speed on synchronization performance.Simulation results show that the proposed algorithm has lower energy consumption and a higher synchronization precision than TSHL,MU-Sync,multi-hop and D-Sync algorithms.
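    For context, cluster-head style synchronization is typically built on a two-way timestamp exchange; the generic sketch below estimates clock offset and one-way delay from the four timestamps T1-T4. It is the textbook formula, not the paper's protocol, which additionally compensates for node movement and the dynamically changing sound speed.

```python
# Generic two-way timestamp exchange: estimate offset and one-way delay from T1..T4.
def offset_and_delay(t1, t2, t3, t4):
    """t1: request sent (node clock), t2: request received (cluster-head clock),
    t3: reply sent (cluster-head clock), t4: reply received (node clock)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # how far the node clock lags the head clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # estimated one-way propagation delay
    return offset, delay

print(offset_and_delay(100.0, 160.0, 161.0, 121.0))   # offset = 50.0, delay = 10.0
```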
  • DANG Xiaochao, LI Yuexia, HAO Zhanjun, ZHANG Tong
    Computer Engineering. 2020, 46(2): 221-229. https://doi.org/10.19678/j.issn.1000-3428.0053812
    To address the K-barrier coverage problem of Wireless Sensor Network(WSN) in a 3D environment,this paper proposes 3D-ACO,an improved ant colony optimization algorithm.The 3D surface is mapped to a 2D plane for mesh generation.The spatial weight and the deployment direction angle are introduced via the mesh gradient to improve the ant colony algorithm,thus finding the shortest path to construct the barrier.The mobile nodes are used to fill the gaps between the barriers,so as to ensure the constructed barriers are strong barriers.Experimental results show that compared with the strong optimal and strong greedy algorithms,the proposed algorithm can effectively improve the utilization of nodes while reducing the energy consumption of nodes.Besides,the barrier coverage constructed in the 3D environment has strong adaptability.
  • YIN Yanqing, GONG Huajun, WANG Xinhua
    Computer Engineering. 2020, 46(2): 230-234. https://doi.org/10.19678/j.issn.1000-3428.0053584
    Despite the outstanding performance of deep neural network in object detection,it is hard to implement high-performance real-time object detection on embedded devices due to the complex structure and large amounts of required computation.To address the problem,this paper proposes a YOLOv3-based object detection algorithm.The algorithm uses half precision inference strategy to accelerate the inference of YOLO algorithm.Another inference strategy adaptive to video motions is also adopted to use object correlation between adjacent frames to decrease the running frequency of the deep learning algorithm,and further improve the speed of object detection.Experimental results on the ILSVRC dataset show that the proposed algorithm can implement video object detection on NVIDIA TX2 embedded platforms at a speed of 28 frame/s,and its detection accuracy is close to that of the original YOLOv3.
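    The motion-adaptive part of the pipeline can be illustrated with a simple frame-skipping loop: run the (half-precision) detector only when the frame differs enough from the last detected frame, and otherwise reuse the previous boxes. The difference metric, threshold and dummy detector below are assumptions for illustration, not the paper's implementation.

```python
# Sketch of motion-adaptive inference: skip the expensive detector on nearly static frames.
import numpy as np

def detect(frame):                       # placeholder for the (half-precision) YOLOv3 call
    return [("object", 0.9, (10, 10, 50, 50))]

def track_video(frames, diff_threshold=12.0):
    last_frame, last_boxes, results = None, [], []
    for frame in frames:
        if last_frame is None or np.mean(np.abs(frame - last_frame)) > diff_threshold:
            last_boxes = detect(frame)   # motion is large: run the detector again
            last_frame = frame
        results.append(last_boxes)       # motion is small: reuse the previous detections
    return results

frames = [np.full((4, 4), v, dtype=np.float32) for v in (0, 1, 2, 40, 41)]
print([len(r) for r in track_video(frames)])
```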
  • ZHANG Chuanwei, ZENG Hongjun, YANG Mengyue, LI Bo, CHEN Shangrui
    Computer Engineering. 2020, 46(2): 235-241. https://doi.org/10.19678/j.issn.1000-3428.0053684
    To address the problem of low accuracy and efficiency of single-scale feature mapping for multi-scale pedestrian detection,a multi-scale detection method based on multi-resolution filter channels is proposed.The scale perception pool is used to enhance the perception field correspondence,and scale invariance is achieved through a soft decision tree.When using the sliding window classification strategy,the ground plane constraint and the sparse grid are combined to reduce the calculation cost and form an acceleration strategy.Experimental results on the Caltech dataset show that the detection accuracy of the method is 88.89% and the detection speed is 15.68 frame/s,and its detection accuracy is higher than that of VJ,WordChannels and other methods.
  • XU Juan, PAN Zhenkuan, WEI Weibo, WANG Jiazhong
    Computer Engineering. 2020, 46(2): 242-249. https://doi.org/10.19678/j.issn.1000-3428.0055191
    Multiphase image segmentation is usually realized by using multiple level set functions to respectively define the characteristic functions of different regions.However,the solution to the extremum of multiphase image segmentation relies on the solution to the extrema of multiple functions,leading to low computational efficiency.Aiming at 3D multiphase images,this paper proposes an improved variational level set model.In this model,the implicit surface evolution of an n-layer level set in a multilayer level set function is used to divide the image into n regions.Then,according to the solution to the extremum of a single level set function,the 3D multiphase piecewise constant image is rapidly segmented and reconstructed.This model also expresses the energy functional as a data term and a regularization term,and designs the general characteristic function of domain partition by using the regularized Heaviside function.Finally,the Split-Bregman projection method is used to conduct the energy minimization solution.Experimental results show that the proposed model can effectively realize 3D multiphase image segmentation.Compared with the Chan-Vese model,this model requires fewer iteration steps and has a faster segmentation speed.
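    The regularized Heaviside function used as the region indicator in variational level-set models of this kind is commonly taken as H_eps(phi) = 0.5*(1 + (2/pi)*arctan(phi/eps)); a small numeric sketch follows. This is the standard regularization shown for illustration, not code quoted from the paper.

```python
# Regularized Heaviside function commonly used in variational level-set segmentation.
import numpy as np

def heaviside_eps(phi, eps=1.0):
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

phi = np.linspace(-3, 3, 7)              # signed level-set values on a 1D slice
print(np.round(heaviside_eps(phi), 3))   # smooth 0-to-1 region indicator
```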
  • ZHAI Qiang, WANG Luyang, YIN Baoqun, PENG Sifan, XING Sisi
    Computer Engineering. 2020, 46(2): 250-254,261. https://doi.org/10.19678/j.issn.1000-3428.0053842
    In order to solve the problem of crowd occlusion and scale change in a single image,this paper proposes a crowd counting algorithm based on multi-column convolution neural network.The algorithm uses Convolutional Neural Network(CNN) with receptive fields of different sizes and the feature attention module to adaptively extract multi-scale crowd features.The deformable convolution is introduced to enhance the learning ability of spatial geometric deformation of the network and optimize the feature map,so as to generate a high quality density map.Experimental results on the Shanghai Tech and UCF_CC_50 datasets show that the algorithm can learn the mapping relationship between input images and crowd density maps,and has high counting accuracy and robustness.
  • ZHOU Wenjun, ZHANG Yong, WANG Yujie
    Computer Engineering. 2020, 46(2): 255-261. https://doi.org/10.19678/j.issn.1000-3428.0053447
    Gesture recognition,as a natural and harmonious way of human-computer interaction,has wide application prospects.Aiming at the problems of low accuracy and poor real-time performance of traditional gesture recognition methods,this paper proposes a real-time recognition method for static gestures based on the DSSD network model.In this paper,a gesture dataset is created and the aspect ratios of the prior boxes are selected by the K-means algorithm and the elbow method.Transfer learning is used to solve the problem of low detection accuracy caused by the small dataset.At the same time,ResNet101 is selected as the base network of the DSSD model according to the recognition accuracy.Then,the deconvolution module of the DSSD model fuses the semantic information of each feature extraction layer to enhance the detection ability of small gesture targets.Experimental results show that the static gesture recognition rate of this method reaches 95.6%,which is 3.6%,4.5% and 2.3% higher than those of the gesture recognition methods based on Faster R-CNN,YOLO and SSD.Moreover,the detection speed of this method is 8 frame/s,which can meet the requirements of real-time detection.
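    Selecting prior-box aspect ratios with K-means, as mentioned in the abstract, can be sketched as clustering the width/height ratios of ground-truth boxes; the toy boxes and the choice of K below are assumptions (the paper picks K with the elbow method).

```python
# Sketch of choosing prior-box aspect ratios by K-means over ground-truth box shapes.
import numpy as np
from sklearn.cluster import KMeans

boxes = np.array([[30, 60], [32, 64], [50, 50], [48, 52], [80, 40], [78, 42]], dtype=float)
ratios = (boxes[:, 0] / boxes[:, 1]).reshape(-1, 1)            # width / height per box

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(ratios)
print(sorted(float(c) for c in km.cluster_centers_.ravel()))   # candidate aspect ratios
```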
  • KE Pengfei, CAI Maoguo, WU Tao
    Computer Engineering. 2020, 46(2): 262-267,273. https://doi.org/10.19678/j.issn.1000-3428.0053576
    To address the overfitting of complex Convolutional Neural Network(CNN) models on small and medium-sized face databases,this paper proposes a face recognition algorithm based on an improved CNN and ensemble learning.Combining the characteristics of plain networks and residual networks,the improved CNN replaces the fully connected layer with an average pooling layer to make the network structure simple and highly portable.Based on this improved CNN,a voting-based ensemble learning strategy is used to perform a convex combination of the results of all individual learners and obtain the final result,so more accurate face recognition can be realized.Experimental results show that the recognition accuracy of the proposed algorithm reaches 98.89%,99.67% and 100% respectively on the Color FERET,AR and ORL face databases with a high convergence speed.
  • LIU Tianyu, JIANG Weiwei, HE Jiangping, HAN Jincang
    Computer Engineering. 2020, 46(2): 268-273. https://doi.org/10.19678/j.issn.1000-3428.0053712
    Livers have similar gray values to surrounding organs in Computed Tomography(CT) images,and the shape of a liver varies among different patients,making precise segmentation of liver CT images a hard problem in medical image processing.To address the problem,this paper proposes a Hierarchical Contextual Cascaded Fully Convolutional Network(HC-CFCN) model to implement automated segmentation of liver CT images.The first-level network is used to realize rough segmentation of the liver contour,and the segmentation results are used as the input of the second-level network together with the original CT image and liver energy image to optimize the segmentation results.Experimental results on the LiTS dataset show that the HC-CFCN model has a higher segmentation precision than U-Net,FCN+3DCRF and V-Net models.
  • ZHAO Jun, ZHU Sui, YANG Wenjing, XU Yanhui, PANG Yu
    Computer Engineering. 2020, 46(2): 274-278,285. https://doi.org/10.19678/j.issn.1000-3428.0053565
    As an effective method of image segmentation,clustering is widely used in the field of computer vision.Compared with other clustering methods,Density Peak Clustering(DPC) has fewer parameters and can effectively identify non-spherical clusters.On this basis,this paper proposes an improved DPC image segmentation algorithm by introducing the uncertainty metric entropy from information theory.The algorithm takes the CIE Lab color space values of the image pixels as feature data.By calculating the information entropy,an adaptive truncation distance is obtained to replace the empirical value.Then,the corresponding decision graph is established and the total number of cluster centers is determined.Accordingly,the non-cluster-center points are classified and the noise points are removed to complete image segmentation.Experimental results on the Berkeley dataset show that the algorithm can effectively achieve color image segmentation,and its average segmentation time and PRI index are 14.658 s and 0.721 respectively.
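    For reference, the two quantities that density peak clustering ranks candidate centers by, the local density rho and the distance delta to the nearest higher-density point, can be computed as in the sketch below. The Gaussian kernel, the fixed cutoff distance and the hard-coded number of centers are simplifications; the paper derives the cutoff adaptively from information entropy.

```python
# Sketch of the density peak clustering decision quantities (rho, delta) on toy 2D data.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # pairwise distance matrix
dc = 0.5                                               # cutoff distance (fixed here for simplicity)

rho = np.exp(-(D / dc) ** 2).sum(axis=1) - 1           # Gaussian-kernel local density
delta = np.empty(len(X))
for i in range(len(X)):
    higher = np.where(rho > rho[i])[0]                 # points with higher density than i
    delta[i] = D[i].max() if higher.size == 0 else D[i, higher].min()

centers = np.argsort(rho * delta)[-2:]                 # decision-graph score: pick 2 centers
print(centers)
```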
  • ZHOU Shuangshuang, SONG Huihui, ZHANG Kaihua, FAN Jiaqing
    Computer Engineering. 2020, 46(2): 279-285. https://doi.org/10.19678/j.issn.1000-3428.0053954
    In the baseline tracker Discriminant Correlation Filter Network(DCFNet),occlusion and motion blur often cause target drift.To address the problem,this paper proposes an end-to-end correlation filtering network,RACFNet.First,EDNet is used to obtain high-level semantic information to make up for the low-level feature representation.Then both channel-wise and spatial residual attention mechanisms are introduced to enable the network to extract more specific information for different tracking objects.Finally,correlation filters are utilized to estimate the target location according to the maximum response value of the output.Experimental results on the OTB-2013 and OTB-2015 benchmark datasets show that RACFNet runs at an average speed of 92 frame/s,and its tracking success rate is improved by 8.20% and 10.69% respectively compared with DCFNet.
  • ROU Te, SE Chajia, CAI Rangjia
    Computer Engineering. 2020, 46(2): 286-291. https://doi.org/10.19678/j.issn.1000-3428.0053836
    Sentences are characters or words that are combined according to grammatical rules.Semantic segmentation is a decoding problem of sentence combination rules,that is,parsing the meaning of sentences.If the semantic analysis is performed directly after the Tibetan word segmentation,the granularity is too small,and word ambiguity is prone to occur.However,if the sentence is used as the analysis unit,the granularity is too large to reveal the semantics of the sentence.To this end,this paper proposes a semantic segmentation method for Tibetan sentences.The method segments sentences by semantic chunk,the length of which is between a word and a sentence.After word segmentation and labeling of the sentence,the word segmentation results are re-combined to segment the sentence into several semantic chunks.Then the dilated convolutional neural network model is used to identify the semantic chunks.Experimental results show that the accuracy of the proposed method for Tibetan sentences achieves 94.68%.
  • ZHANG Xiang, CHEN Xin
    Computer Engineering. 2020, 46(2): 292-297,303. https://doi.org/10.19678/j.issn.1000-3428.0053887
    To address the problem of sparse labeled datasets of lung CT images in actual tasks,this paper proposes a lung cancer diagnosis method that combines the U-Net self-learning model and the C3D multi-task learning network.The LUNA16 dataset and the DSB dataset are preprocessed to ensure consistent voxels and directions of slice images.Then the method uses the C3D multi-task learning network to construct a lung nodule detection model,and uses 165 slice images from the LUNA16 dataset and 161 slice images from the DSB dataset to train the improved U-Net network model.The labeled samples are expanded using self-learning to construct a lung mass detection model.On this basis,the nodule and mass detection results are combined to obtain the final diagnosis of lung cancer.Experimental results show that the lung cancer prediction accuracy of the proposed method is 85.3%±0.3%,which is at the same level as supervised learning methods.
  • QI Yongfeng, LI Longqiang
    Computer Engineering. 2020, 46(2): 298-303. https://doi.org/10.19678/j.issn.1000-3428.0053501
    In order to effectively detect epileptic signals in Electroencephalogram(EEG) signals,this paper proposes a one-dimensional Local Ternary Pattern(1D-LTP) operator to extract signal features,and the features are classified by combining Principal Component Analysis(PCA) and Extreme Learning Machine(ELM).The 1D-LTP operator is used to calculate the feature transformation codes of signal points in the upper and lower patterns,so as to accurately filter out interference signals.Then the histogram of transformation codes is dimensionally reduced by PCA and classified by ELM,and the classification performance is evaluated by 10-fold cross validation.Experimental results show that the proposed method can identify EEG signals during seizures,and the recognition accuracy can reach 99.79%.
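    A generic one-dimensional local ternary pattern can be sketched as follows: each neighbor is coded -1/0/+1 against the center value with a threshold t, and the code is split into an "upper" and a "lower" binary pattern. The radius, threshold and exact encoding here are assumptions for illustration and may differ from the paper's 1D-LTP definition.

```python
# Sketch of a 1D local ternary pattern split into upper (+1) and lower (-1) binary codes.
import numpy as np

def ltp_1d(signal, radius=2, t=0.5):
    upper, lower = [], []
    for i in range(radius, len(signal) - radius):
        neighbors = np.concatenate([signal[i - radius:i], signal[i + 1:i + 1 + radius]])
        diff = neighbors - signal[i]
        code = np.where(diff > t, 1, np.where(diff < -t, -1, 0))   # ternary code per neighbor
        # Upper pattern keeps the +1 positions, lower pattern keeps the -1 positions.
        upper.append(int("".join("1" if c == 1 else "0" for c in code), 2))
        lower.append(int("".join("1" if c == -1 else "0" for c in code), 2))
    return np.array(upper), np.array(lower)

sig = np.array([0.1, 0.2, 2.0, 0.1, -1.5, 0.0, 0.3, 0.2])
print(ltp_1d(sig))
```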
  • XU Shaofeng, PAN Wentao, XIONG Yun, ZHU Yangyong
    Computer Engineering. 2020, 46(2): 304-308,314. https://doi.org/10.19678/j.issn.1000-3428.0053873
    In the process of software development,code annotation tools with good performance can improve development efficiency and reduce maintenance costs.Some researchers regard the automatic generation of code annotations as a task that translates source code into natural language annotations,but they only take the sequence information of source code into consideration while ignoring the internal structural characteristics of the code.Therefore,on the basis of the common end-to-end translation model,the structure information of the source code is embedded into the encoder-decoder translation model by using the code abstract syntax tree,and a structure-aware dual encoder-decoder model is proposed,which comprehensively considers the sequence information of the source code and the structural features within the code.Experimental results on real datasets show that compared with the PBMT and Seq2seq models,the BLEU score of the proposed method is higher and the generated annotations are more accurate and readable.
  • MUNIRE·Muhetare, LI Xiao, YANG Yating
    Computer Engineering. 2020, 46(2): 309-314. https://doi.org/10.19678/j.issn.1000-3428.0053080
    Uyghur morphology is comparatively complex and the configuration affix plays a significant role in Uyghur,which is grammatically very different from Chinese.Aiming at the morphological characteristics of Uyghur,this paper analyzes the function of Uyghur affixes in statistical machine translation from Chinese to Uyghur.A phrase-based Chinese-Uyghur statistical translation system is built to conduct comparative experiments on Chinese-Uyghur parallel corpora with different levels of granularity,such as the word level granularity,the stem level granularity,the maximum stem level granularity,the stem-affix level granularity and the stem-suffix level granularity.Then the influence of Uyghur with different granularities on word alignment quality and language model quality in Chinese-Uyghur machine translation is studied.Experimental results show that the translation quality with the stem-based and stem-suffix-based Uyghur target corpora is significantly improved.
  • LI Hui, ZHANG Nannan, CAO Zhuo, ZHENG Hai, CHEN Xiangping
    Computer Engineering. 2020, 46(2): 315-320. https://doi.org/10.19678/j.issn.1000-3428.0053521
    Terrorist attacks happen frequently in the world today.Predicting and analyzing the suspects helps to find new or hidden terrorists as early as possible and launch targeted operations against them,so as to reduce the loss of people and property.Therefore,machine learning methods are used to predict one or more suspects based on the multiple characteristics of terrorist attacks.Bayesian optimization is used to optimize four algorithms,including Bagging,decision tree,random forest and Fully Connected Neural Network(FCNN).Then,the preprocessed data is input into the optimized algorithm models to predict the suspects of terrorist attacks.The accuracy,recall,precision and F1 values are used as indicators to evaluate the performance of the algorithms.Experimental results show that,when only one suspect is output,the tree-based algorithms generally perform well,with the Bagging algorithm achieving the highest prediction accuracy of 0.911,while the FCNN can obtain prediction results for multiple suspects,with a prediction accuracy of 0.877 8.