
15 December 2020, Volume 46 Issue 12
    

  • Hot Topics and Reviews
  • TAN Minsheng, YANG Jie, DING Lin, LI Xingjian, XIA Shiying
    Computer Engineering. 2020, 46(12): 1-11. https://doi.org/10.19678/j.issn.1000-3428.0059070
    Blockchain can effectively connect the Internet of Things, 5G, big data, artificial intelligence and other technologies, playing an important role in new infrastructure construction. As the core of blockchain technology, the consensus mechanism determines the consistency and correctness of blockchain databases, and thus the security, extensibility and throughput of blockchain. This paper categorizes the existing consensus mechanisms of blockchain into three types of single consensus mechanisms and six types of mixed consensus mechanisms. From the perspective of principle and realization, it systematically describes the principle of each kind of consensus mechanism, its advantages and disadvantages, and summarizes the operations nodes perform to reach consistency. From the perspective of engineering applications, it analyzes the application scenarios of the consensus mechanisms, introduces representative blockchain projects, and compares the key performance indicators of the consensus mechanisms. Finally, it gives solutions to the energy consumption and efficiency problems faced by existing consensus mechanism studies, and discusses further research directions including the reward and punishment mechanism, network slicing and storage slicing.
  • PENG Long, CHEN Junshi, AN Hong
    Computer Engineering. 2020, 46(12): 12-20. https://doi.org/10.19678/j.issn.1000-3428.0058008
    As mainstream Molecular Dynamics (MD) simulation software, AMBER is widely used to study the microscopic movements of molecular systems. In order to use the massive computing resources of Sunway TaihuLight to accelerate AMBER-based simulation of the movement of molecular systems, AMBER is migrated to the master core of the SW26010 processor to build a master-slave acceleration model, realizing AMBER's parallelization on the slave cores. On this basis, a master-slave asynchronous pipelining scheme is proposed. The Local Data Memory (LDM) cache and Direct Memory Access (DMA) channel techniques of the SW26010 slave cores are used to address their low memory access speed and limited parallel memory access bandwidth. Also, part of the slave core code is vectorized through SIMD instructions to further improve the computational performance of AMBER on Sunway TaihuLight. Test results show that the computational performance of the optimized AMBER hotspot functions is improved by 15 times, and the overall performance of a single core group is improved by 4.6 times compared with the Intel Xeon Platinum 8163.
  • ZHANG Man, YAN Fei, YAN Gaowei, LI Pu
    Computer Engineering. 2020, 46(12): 21-26,35. https://doi.org/10.19678/j.issn.1000-3428.0057743
    In view of the fact that traditional static partition algorithms for control sub-regions in a road network cannot adapt to the dynamic changes of traffic flow in complex road networks, this paper proposes a dynamic partition algorithm based on the Dirichlet problem. According to density peak theory, the concept of local density is redefined to identify the stable blocks in the control sub-regions. On this basis, the Dirichlet problem solving model is integrated into the dynamic partition process, and the roads with low homogeneity are iteratively re-partitioned to realize dynamic partition of the control sub-regions, revealing the evolution of the sub-regions as the traffic flow changes dynamically. Experimental results on a dataset of the real road network in Farmer Branch, USA show that, compared with the static density peak partition algorithm, the proposed algorithm reduces the mean homogeneity of sub-regions and the normalized total variance index by 22% and 11% respectively, and improves the homogeneity of the control sub-regions compared with the two-layer dynamic partitioning algorithm.
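    The abstract's starting point, the local density from classical density peak clustering, can be sketched as follows. This is the standard cut-off definition, not the paper's redefined one, and the distance matrix and cutoff below are purely illustrative:

```python
def local_density(dist, cutoff):
    """Classical cut-off local density from density peak clustering:
    rho_i = number of other points within distance `cutoff` of point i."""
    n = len(dist)
    rho = [0] * n
    for i in range(n):
        for j in range(n):
            if i != j and dist[i][j] < cutoff:
                rho[i] += 1
    return rho

# Toy symmetric distance matrix for four hypothetical road segments.
dist = [
    [0.0, 0.5, 2.0, 2.2],
    [0.5, 0.0, 1.8, 2.1],
    [2.0, 1.8, 0.0, 0.4],
    [2.2, 2.1, 0.4, 0.0],
]
print(local_density(dist, cutoff=2.1))  # -> [2, 2, 3, 1]
```

    Points with high local density become candidate cluster (stable block) centers; the paper redefines this quantity to suit road-network homogeneity.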
  • WANG Zongwei, SHI Jinglin, FENG Xuelin, GOU Zhihang
    Computer Engineering. 2020, 46(12): 27-35. https://doi.org/10.19678/j.issn.1000-3428.0057247
    Symbol timing synchronization is a key step in the initial cell search of 5G systems. To address the poor frequency offset resistance and high computational complexity of traditional timing synchronization algorithms, this paper proposes an improved timing synchronization algorithm based on the Primary Synchronization Signal (PSS) for 5G systems. Based on the piecewise correlation algorithm, the proposed algorithm presets normalized frequency offsets for the PSS sequence, and utilizes the conjugate symmetry of the PSS sequence to pre-store it in the terminal after piecewise processing. In addition, the algorithm combines convolution with the overlap-save blocking method to achieve fast correlation within each segment's correlation window, and then makes a threshold judgment after delay-accumulating the correlation values to complete the joint detection of timing synchronization and coarse frequency offset estimation. Simulation results show that, compared with the differential correlation algorithm and the piecewise correlation algorithm, the proposed algorithm effectively improves the anti-frequency-offset performance of the system, reduces the computational complexity, and meets the timing synchronization requirements of 5G systems.
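    The core idea of piecewise correlation can be sketched in a few lines. The paper's full algorithm adds FFT-based fast correlation, preset frequency offsets and threshold detection; this toy only shows why per-segment coherent correlation with non-coherent accumulation tolerates a carrier frequency offset better than one full-length correlation (the sequences below are illustrative):

```python
def segment_correlate(rx, ref, n_seg):
    """Piecewise correlation: split the reference into n_seg segments,
    correlate each segment coherently with the received samples, then
    accumulate the segment magnitudes non-coherently. Phase rotation
    within one short segment is small, so a frequency offset degrades
    the metric far less than it would a full-length correlation."""
    seg = len(ref) // n_seg
    total = 0.0
    for k in range(n_seg):
        part = sum(rx[k * seg + i] * ref[k * seg + i].conjugate()
                   for i in range(seg))
        total += abs(part)
    return total

ref = [1 + 0j, 1 + 0j, -1 + 0j, -1 + 0j]
print(segment_correlate(ref, ref, n_seg=2))  # perfectly aligned: 4.0
```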
  • YAO Bofan, DENG Hongping, CAI Ming
    Computer Engineering. 2020, 46(12): 36-42. https://doi.org/10.19678/j.issn.1000-3428.0056942
    The experimental subjects of existing studies on mode classification of urban road traffic operation status are not diverse, and the applicability of the standard method is poor. To address these problems, this paper proposes a mode classification method for traffic operation status based on Gaussian mixture clustering with graded random sampling. The method uses relative speed as the clustering indicator. Meanwhile, it uses graded random sampling to sample from roads of the six main road grades that make up the urban road network. Different sampling numbers are set for multiple clustering experiments, and the clustering results are compared. The experimental results show that, when the number of sampled roads is more than 3 000, the NMI index generally remains above 0.95 and the clustering results are basically stable. The most reasonable number of traffic status modes is 5, under which there is no obvious coincidence of clustering centers and the DBI index is the smallest. Compared with the national standard, FCM and K-means clustering methods, the proposed method has better classification performance: it complies with the distribution characteristics of the clustering indicator and has stronger correlation with it.
  • Artificial Intelligence and Pattern Recognition
  • TANG Hongxuan, WU Kaili, ZHU Mengmeng, HONG Yu
    Computer Engineering. 2020, 46(12): 43-51. https://doi.org/10.19678/j.issn.1000-3428.0056056
    Machine Reading Comprehension (MRC) is a question answering task that automatically generates or extracts answers for a given text and specific questions. This task is of great significance for evaluating how well computer systems understand natural language. Compared with traditional reading comprehension tasks, multi-document reading comprehension requires computational models with stronger reasoning and comprehension capabilities. Therefore, this paper proposes a reading comprehension model based on multi-task joint training. The model is a joint learning model composed of a set of neural networks with different functions. It executes the two key steps, document selection and answer extraction, by imitating the basic way people reason and answer questions. The document selection process incorporates a relevance discrimination mechanism based on the attention matrix, which aims to establish the relationship between documents, while the answer extraction process uses a text-level bi-directional attention mechanism to find text clues related to the answer. The two parts are attached to a set of neural reading comprehension models to form a multi-document reading comprehension method based on joint learning. Experimental results on the HotpotQA dataset show that, compared with the baseline model, the proposed model increases the EM value and F1 value by 2.1% and 1.7%, respectively.
  • LI Peng, MIN Hui, LUO Aijing, QU Haoyu, YI Na, XU Jiaqi
    Computer Engineering. 2020, 46(12): 52-59. https://doi.org/10.19678/j.issn.1000-3428.0056545
    How to construct a reliable dynamic protein network is one of the key problems affecting the prediction of unknown protein functions and the recognition of protein complexes. However, existing protein network construction methods and function prediction methods generally have low robustness and low prediction accuracy. Therefore, this paper proposes an improved dynamic protein network construction algorithm. Protein-protein interactions are modeled as an evolutionary graph, and the whole protein network is divided into dynamic subnets over multiple time slices based on the active cycle of each protein. The protein-protein interactions among the subnets are determined according to the connection strength between proteins, yielding the global dynamic protein network. On this basis, a function prediction algorithm, IPA-PF, based on the function correlation score and neural network is proposed by examining the differences in function annotation between neighbor nodes of unknown functional proteins. The experimental results on several open biological datasets show that the proposed algorithm outperforms the HPMM, D-PIN, EFM and FP-BMD algorithms in terms of recall, precision and F-measure, and it is insensitive to input parameters. On the premise of ensuring the accuracy of function prediction, the time complexity of the proposed algorithm is within a reasonable range.
  • LIAO Hanyue, ZENG Jianping, WU Chengrong
    Computer Engineering. 2020, 46(12): 60-66,72. https://doi.org/10.19678/j.issn.1000-3428.0056255
    Forum traffic prediction is of great significance to network planning and public opinion management, but existing linear prediction models fail to capture nonlinear relationships, and the feature engineering of nonlinear prediction models is overly complicated. To address these problems, this paper uses the historical time series as the feature to establish a model combining different algorithms to predict the number of forum posts. Four models, differential autoregressive moving average (ARIMA), Long Short-Term Memory (LSTM) neural network, Prophet and gradient boosting decision tree, are used to predict the time series respectively. Then, based on the idea of weighted voting, the models vote to select a dense interval of predicted values within each unit of the time series. According to the density of the predicted values of each model, a weight is assigned to each prediction, and the final prediction result is obtained by weighted averaging. The experimental results show that, compared with the arithmetic average model and the weighted average model based on Root Mean Square Error (RMSE), the proposed model reduces the RMSE and relative error of the prediction results.
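    The density-based weighting step can be sketched as follows. This is a minimal interpretation of the abstract, not the paper's exact scheme: each model's prediction is weighted by how many of the other predictions fall close to it, so an outlier model gets down-weighted (all numbers below are illustrative):

```python
def density_weighted_average(preds, radius):
    """Weight each model's prediction by the number of predictions
    (including its own) within `radius` of it, then take the
    weighted average. Outliers receive low weight."""
    weights = [sum(1 for q in preds if abs(q - p) <= radius)
               for p in preds]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, preds)) / total

# Four hypothetical model outputs for one time step (e.g. ARIMA,
# LSTM, Prophet, GBDT); the outlier 400 gets the lowest weight.
print(density_weighted_average([100, 110, 105, 400], radius=20))  # -> 134.5
```

    A plain arithmetic mean of the same four values would be 178.75, so the dense cluster around 100-110 dominates the combined result, which is the intent of the voting step.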
  • TAN Wenan, WU Jiakai
    Computer Engineering. 2020, 46(12): 67-72. https://doi.org/10.19678/j.issn.1000-3428.0056206
    For the large number of Web services with similar functions but different qualities, Web service composition optimization can make them meet the different needs of customers and be widely used. To address the low search efficiency and inaccurate optimization of existing service composition optimization methods, this paper proposes an Improved Flower Pollination Algorithm (IFPA), which realizes dynamic switching between global search and local search to promote population optimization. The mutation and crossover operators of the differential evolution algorithm are added to FPA to enhance the efficiency and diversity of the flower population. Also, a greedy strategy is used to select flowers with high fitness values to accelerate convergence and enhance the optimization ability of the algorithm. The experimental results show that, compared with the DE, KDE, FPA and EFPA algorithms, the proposed algorithm has a faster convergence speed and better optimization performance in solving the service composition problem.
  • KANG Yan, LI Tao, LI Hao, ZHONG Sheng, ZHANG Yachuan, BU Rongjing
    Computer Engineering. 2020, 46(12): 73-79,87. https://doi.org/10.19678/j.issn.1000-3428.0056234
    To address the problems of existing collaborative filtering recommendation algorithms, such as low interpretability, difficult information extraction in content-based recommendation, and low recommendation efficiency, this paper proposes a hybrid recommendation model fusing knowledge graphs and collaborative filtering. The model is composed of the RCKD model and the RCKC model, the former combining knowledge graphs and deep learning, and the latter combining knowledge graphs and collaborative filtering. After obtaining the inference paths of the knowledge graph, the RCKD model uses the TransE algorithm to embed the paths into vectors, and captures the semantics of path inference by using LSTM and a soft attention mechanism. Then the importance of different path inferences is distinguished through a pooling operation, and the prediction score is obtained through the fully connected layer and the sigmoid function. According to the semantic similarity of knowledge graph representation learning, the RCKC model uses the collaborative filtering algorithm to obtain the prediction score. The two models are fused according to the accuracy of their prediction scores, and finally an interpretable hybrid recommendation model is obtained. The experimental results on the MovieLens dataset show that the proposed model has better recommendation interpretability and higher recommendation accuracy than the RKGE model, RippleN model and classical collaborative filtering algorithms.
  • PANG Zhihua, QI Chenkun
    Computer Engineering. 2020, 46(12): 80-87. https://doi.org/10.19678/j.issn.1000-3428.0056542
    To address the low accuracy and robustness of monocular visual odometry, this paper proposes a monocular visual odometry algorithm that combines a tree-based feature matching algorithm and a real-time map update strategy. The tree-based feature matching algorithm makes no motion assumptions, and can quickly and reliably establish feature matching relationships for various motions, ensuring the real-time performance and robustness of the algorithm. The real-time map update framework separates map expansion from optimization, which ensures the real-time performance of the algorithm and enables the current frame to track enough 3D map points under fast or violent motions, greatly improving the robustness of the algorithm. In addition, in order to reduce the impact of real-time map updates on the accuracy of map points, a parallax-based weight matrix estimation method is proposed to make high-precision map points dominate the optimization functions, ensuring the accuracy of the algorithm. The experimental results on two public datasets show that, compared with the ORB-SLAM algorithm, the proposed algorithm has higher localization accuracy and better robustness under fast or violent camera motions.
  • LU Shuxia, CAI Lianxiang, ZHANG Luohuan
    Computer Engineering. 2020, 46(12): 88-95,104. https://doi.org/10.19678/j.issn.1000-3428.0056286
    In actual classification problems, there is often a certain amount of noise in the data caused by artificial or other factors, so it is very important to improve the anti-noise ability of the classifier. However, the hinge loss function used by the traditional Support Vector Machine (SVM) is sensitive to noisy data and yields poor classification performance. In order to eliminate the influence of noisy data, this paper proposes a robust SVM based on momentum-accelerated zero-order variance reduction. By introducing a new form of loss function and adopting the idea of margin distribution, a robust SVM optimization model is established to improve the anti-noise ability of SVM. By using the zero-order variance reduction algorithm and the momentum acceleration technique, a new method for solving the optimization model is proposed. Experimental results show that this method effectively reduces the influence of variance by introducing a gradient correction term, and significantly increases the convergence speed of the algorithm by using the momentum acceleration technique.
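    Why the hinge loss is noise-sensitive is easy to show numerically: it grows without bound as a point's margin becomes more negative, so one badly mislabeled point can dominate the objective. The paper introduces its own new loss; the bounded "ramp" loss below is only a well-known illustrative alternative, not the paper's function:

```python
def hinge(margin):
    """Standard SVM hinge loss: unbounded for negative margins."""
    return max(0.0, 1.0 - margin)

def ramp(margin, s=-1.0):
    """Bounded 'ramp' loss: hinge capped at 1 - s, so a single badly
    mislabeled point cannot dominate the objective."""
    return min(hinge(margin), 1.0 - s)

# A noisy point with margin -9 contributes 10 to the hinge objective
# but only 2 to the bounded loss.
print(hinge(-9.0), ramp(-9.0))  # -> 10.0 2.0
```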
  • ZHANG Yusheng, ZHANG Guizhu, WANG Xiaofeng
    Computer Engineering. 2020, 46(12): 96-104. https://doi.org/10.19678/j.issn.1000-3428.0056245
    Most traditional recommendation algorithms fail to effectively solve the cold start problem because they do not consider the exposure factor. To this end, this paper introduces an exposure hidden variable and proposes a hybrid recommendation algorithm based on the Variational Auto-Encoder (VAE). In the context of collaborative filtering, the algorithm uses Markov Chain Monte Carlo (MCMC) sampling to infer the exposure hidden variable and feature vectors. In the inference process, the distribution obtained from the previous iteration is used as the prior, and the conjugate relationship is used to directly obtain the posterior of the parameters to improve the inference accuracy. On this basis, the VAEe is used to extract the hidden features of the user's exposure vector so as to predict exposure for the user, and the VAEi is trained to extract the collaborative hidden features of products so as to solve the cold start problem for new products. Experimental results on a real-world dataset show that the proposed algorithm improves the recommendation performance for old products and new products at the same time.
  • YANG Xiaomei, GUO Wenqiang, ZHANG Juling
    Computer Engineering. 2020, 46(12): 105-112,133. https://doi.org/10.19678/j.issn.1000-3428.0058766
    In order to improve the ability to optimize network structures, this paper proposes an improved deep neural network structure search method. To address the difficulty of measuring the distance between network structures, a graph-based method for measuring the distance between deep neural network structures is proposed within the neural network structure search scheme. By analyzing the performance of network structures trained with a small number of steps versus fully trained, a method for finding diverse optimal network structures is proposed based on the advantages of diverse solutions. Experimental results show that the neural network structure search method can effectively improve the solution quality and help find a better network structure.
  • LI Yanhui, ZHENG Chaomei, WANG Weili, YANG Xin
    Computer Engineering. 2020, 46(12): 113-119,141. https://doi.org/10.19678/j.issn.1000-3428.0055734
    To address the emotion analysis of Chinese-English code-mixed microblog texts, this paper proposes a new multi-dimensional and multi-emotion analysis method. The Chinese-English code-mixed texts are translated into Chinese and English respectively, and features of the texts in different language dimensions are extracted. Then the corresponding semantic information is extracted from the Chinese-English mixed language, Chinese, and English versions. By combining the three kinds of semantic information extracted from different language dimensions, a multi-emotion classification model is constructed and trained for fine-tuning. Finally, the classification result is obtained based on the emotion probabilities. Experimental results show that, compared with simple processing methods, the proposed method more comprehensively extracts the semantic features of Chinese-English code-mixed texts, accurately distinguishes multiple emotions in the text, and has a better classification effect.
  • Cyberspace Security
  • HAN Jialiang, XU Ming
    Computer Engineering. 2020, 46(12): 120-126. https://doi.org/10.19678/j.issn.1000-3428.0056745
    Message authentication allows the receiver of a message to detect whether the message has been forged or illegally modified by someone other than the legitimate sender. Traditional message authentication schemes are typically implemented at the network layer or higher, and are vulnerable to security threats such as replay attacks and denial of service attacks. Building on existing physical-layer schemes, this paper proposes a message authentication scheme that uses the Minimum Mean Square Error (MMSE) method for channel estimation under the Gaussian vector multiple-input channel model. An optimal attack strategy for the eavesdropper is also formulated, and the reachable secrecy boundary is determined according to the probabilities of the eavesdropper's successful attacks. By using information theory to analyze the maximum secure authentication rate of the channel, the secrecy capacity region of the channel is obtained. Experimental results show that the average probability of a successful attack by the eavesdropper decreases exponentially with the number of received messages. When the spatial correlation coefficients between all senders and the eavesdropper are lower than 0.3, the average probability of a successful attack by the eavesdropper is less than 1.87×10⁻⁷, which verifies the security of the proposed scheme.
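    The paper applies MMSE estimation to a vector channel; the scalar special case below illustrates the estimator's characteristic "shrinkage" form, which is all this toy is meant to show (the variances are illustrative, not from the paper):

```python
def mmse_scalar(y, sigma_x2, sigma_n2):
    """Scalar MMSE estimate of x from the observation y = x + n,
    with zero-mean Gaussian x (variance sigma_x2) and independent
    zero-mean Gaussian noise n (variance sigma_n2):
        x_hat = sigma_x2 / (sigma_x2 + sigma_n2) * y
    Heavier noise shrinks the estimate toward the prior mean 0."""
    return sigma_x2 / (sigma_x2 + sigma_n2) * y

# Equal signal and noise power: the observation is halved.
print(mmse_scalar(2.0, sigma_x2=1.0, sigma_n2=1.0))  # -> 1.0
```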
  • HU Qingshuang, LI Chenghai, LU Yanli, SONG Yafei
    Computer Engineering. 2020, 46(12): 127-133. https://doi.org/10.19678/j.issn.1000-3428.0059022
    Network Security Situation Prediction (NSSP) based on the Belief Rule Base (BRB) combines qualitative empirical knowledge with quantitative network data, and has a good prediction effect. However, when the training data is unevenly distributed, traditional prediction methods based on overall optimization tend to overfit, which leads to low prediction accuracy. To address this problem, this paper considers the limited scope of rules in the BRB, and proposes a NSSP method based on a Hierarchically Optimized Belief Rule Base (HOBRB). The action space of the model is established and the rule scopes are divided. Then the training data is allocated to the corresponding rule scope according to the input coordinates. By setting a critical value, the rules are divided into three levels: fully optimizable, partially optimizable, and non-optimizable. Meanwhile, the number of parameters to be optimized in the rules is reduced. Experimental results show that, compared with GAO-BRB, PSO-BRB and other prediction methods, the proposed method can effectively avoid overfitting and improve the prediction accuracy of the network security situation.
  • SUN Jiahao, MENG Xiangsi, ZHANG Haoyun, CHANG Xiaolin, XU Yan, GUAN Zhuang
    Computer Engineering. 2020, 46(12): 134-141. https://doi.org/10.19678/j.issn.1000-3428.0056097
    In order to solve the problems faced by digital works on the Internet, such as the difficulty of intellectual property registration, rampant piracy and confusion in property right transactions, this paper constructs a blockchain-based intellectual property protection model using an improved PBFT algorithm. The model proposes a credit-based improved PBFT consensus mechanism, which can realize efficient, low-cost and scalable intellectual property protection. Smart contracts are designed for intellectual property registration and transfer, and their automatically executed preset mechanisms are used to ensure the efficiency and transparency of the model. The experimental results show that the proposed model reduces the communication overhead between nodes, increases the honesty probability of the master node, and has stronger robustness.
  • GUI Qiong, Lü Yongjun, CHENG Xiaohui
    Computer Engineering. 2020, 46(12): 142-149,184. https://doi.org/10.19678/j.issn.1000-3428.0056045
    In view of the privacy leakage caused by similarity attacks, this paper proposes a (r,k)-anonymity model. Based on the semantic association between sensitive attributes, a proximity resistance threshold r is set, and an anonymization method, Generalized Data for Privacy Proximity Resistance (GDPPR), that satisfies the model is designed. The fuzzy clustering technique is used to complete the cluster partitioning, and the distance matrix is obtained by combining the dissimilarity of sensitive attributes. As a result, the frequency of proximate sensitive attribute values in each equivalence class is kept under the threshold r while data availability is ensured. Experimental results on two standard datasets show that GDPPR satisfies the (r,k)-anonymity model: it effectively resists similarity attacks, and reduces the information loss caused by generalization.
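    The constraint the model enforces can be sketched as a simple check. This is one plausible reading of the abstract, not the paper's exact formulation: for each sensitive value in an equivalence class, the fraction of records with a semantically proximate value must stay below r (the salary-band example and proximity predicate are invented for illustration):

```python
def satisfies_r(equiv_class, similar, r):
    """Check one equivalence class: for every sensitive value present,
    the fraction of records whose value is 'proximate' to it
    (per the predicate `similar`) must not exceed threshold r."""
    n = len(equiv_class)
    for v in set(equiv_class):
        close = sum(1 for u in equiv_class if similar(v, u))
        if close / n > r:
            return False
    return True

# Toy sensitive attribute (salary bands); values within one band
# of each other count as proximate.
similar = lambda a, b: abs(a - b) <= 1
print(satisfies_r([1, 2, 5, 8], similar, r=0.5))  # -> True
print(satisfies_r([1, 1, 2, 8], similar, r=0.5))  # -> False
```

    In the second class, three of four records fall in bands 1-2, so a similarity attack would reveal the sensitive value's neighborhood; the check rejects it.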
  • YANG Xiaodong, PEI Xizhen, CHEN Guilan, WANG Meiding, WANG Caifen
    Computer Engineering. 2020, 46(12): 150-156,192. https://doi.org/10.19678/j.issn.1000-3428.0056369
    Cloud storage can save local storage space when dealing with massive data, but increases the risk of data loss or damage. Although existing audit schemes can verify the integrity of cloud data, they are mainly used in single-user single-replica environments, do not support user revocation, and incur high calculation costs for dynamic data updates. To solve these problems, this paper proposes a multi-user, multi-replica public audit scheme for cloud data based on secret sharing and a multi-branch path tree. The scheme introduces a proxy re-signature algorithm to realize secure user revocation, and uses the multi-branch path tree to dynamically update cloud data, including modification, insertion and deletion. The security and computational efficiency of the scheme are analyzed. Experimental results show that the proposed scheme satisfies the robustness requirements of auditing and can resist collusion attacks by the cloud server and revoked users. Compared with similar multi-replica data integrity schemes, the proposed scheme has higher computational efficiency in the signature and challenge-response phases.
  • WU Kun, WEI Guoheng
    Computer Engineering. 2020, 46(12): 157-162,200. https://doi.org/10.19678/j.issn.1000-3428.0055577
    Broadcast authentication is an important part of Wireless Sensor Network (WSN) security. To address the high resource consumption and low security of traditional broadcast authentication schemes, this paper proposes a certificateless broadcast message authentication scheme without bilinear pairings, using the Elliptic Curve Cryptography (ECC) algorithm. The scheme bases its security on the Elliptic Curve Discrete Logarithm (ECDL) problem, and shares intermediate calculation results through cooperation between neighboring nodes so as to reduce the calculation cost of nodes. Through formal analysis, the security of the scheme against two kinds of attackers is demonstrated under the random oracle model. The scheme also has multiple security attributes, such as freedom from key escrow and resistance to replay attacks and denial of service attacks. Experimental results show that the proposed scheme consumes less energy to perform a broadcast message authentication and extends the network life cycle, making it suitable for resource-constrained WSNs.
  • Mobile Internet and Communication Technology
  • HU Dongliang, QIN Xiaojun, WANG Xiaofeng
    Computer Engineering. 2020, 46(12): 163-170. https://doi.org/10.19678/j.issn.1000-3428.0056018
    Network scanning is an important means of network security evaluation and network management. Traditional single-point active scanning methods and tools, such as Zmap and Nmap, suffer from limited bandwidth utilization, low scanning efficiency and significant CPU usage. This paper proposes a distributed network scanning architecture and task-scheduling algorithm based on message middleware. The algorithm uses message middleware to synchronize information and return scanning results, and constructs a task-scheduling model for distributed network scanning. The experimental results show that, compared with traditional single-point active scanning, the proposed distributed scanning technology based on message middleware ensures scanning accuracy while reducing CPU usage and scanning response time by about 10%.
  • MA Qianli, YUAN Yi, SHEN Zhaohui
    Computer Engineering. 2020, 46(12): 171-178. https://doi.org/10.19678/j.issn.1000-3428.0056264
    To relieve the network congestion caused by the large number of nodes deployed in Wireless Sensor Networks (WSNs), this paper proposes a relay node deployment algorithm based on node weight and edge length to reduce the number of network nodes. The deployment of relay nodes is improved for heterogeneous environments. The edges are weighted based on the distance between two nodes and the properties of the nodes, and then sorted according to their weight and edge length. By using the Minimum Spanning Tree (MST) algorithm and graph increment theory, the joining conditions of relay nodes are changed, and relay nodes are added iteratively to reduce the number of relay nodes in the same environment. Results of simulation experiments at different scales show that, compared with the GA-RD and IWGA-RD algorithms, the proposed algorithm uses fewer relay nodes and achieves better network performance. For large samples, it can significantly reduce network load and deployment cost.
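    The MST step over weight-sorted edges can be sketched with Kruskal's algorithm. The composite edge key (node-property weight, then length) and the example graph below are illustrative stand-ins for the paper's actual weighting:

```python
def kruskal(n, edges):
    """Minimum spanning tree via Kruskal's algorithm over `n` nodes.
    `edges` are (key, u, v) tuples; sorting by `key` realizes the
    'sort by weight, then edge length' order the abstract describes
    when the key is built as (weight, length)."""
    parent = list(range(n))

    def find(x):
        # Union-find with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for key, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:        # joining condition: no cycle
            parent[ru] = rv
            tree.append((u, v))
    return tree

edges = [(1.0, 0, 1), (2.5, 1, 2), (2.0, 0, 2), (3.0, 2, 3)]
print(kruskal(4, edges))  # -> [(0, 1), (0, 2), (2, 3)]
```

    The paper modifies the joining condition using graph increment theory so that relay nodes are inserted only where they reduce the total count; the sketch above shows only the baseline MST construction.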
  • MA Yiming, SHI Zhidong, ZHAO Kang, GONG Changlei, SHAN Lianhai
    Computer Engineering. 2020, 46(12): 179-184. https://doi.org/10.19678/j.issn.1000-3428.0056965
    To solve the nonlinear equation problem of indoor Time Difference of Arrival (TDOA) localization, this paper proposes a localization algorithm based on improved Harris Hawk Optimization (HHO), maintaining the optimization mechanism while enhancing the performance of HHO. The proposed algorithm improves the fitness function based on maximum likelihood estimation to obtain better fitness values in the optimization process, which increases the optimization accuracy. Meanwhile, an initial solution is introduced into the initial population positions, which reduces unnecessary global search and improves the convergence speed of the algorithm without affecting the population diversity. Simulation results show that, compared with the DHHO/M, EWOA, IALOT and CSSA algorithms, the proposed algorithm has higher localization accuracy and a faster convergence speed.
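    A residual-based TDOA fitness function of the kind such optimizers minimize can be sketched as follows. The anchor layout, speed of sound and noise-free measurements below are invented for illustration; the paper's maximum-likelihood fitness is more elaborate:

```python
import math

def tdoa_fitness(pos, anchors, tdoa, c=340.0):
    """Sum of squared TDOA residuals for a candidate position `pos`.
    Anchor 0 is the reference; tdoa[i] is the measured arrival-time
    difference between anchor i+1 and anchor 0. Lower is better,
    and the minimum lies at the true position for noise-free data."""
    def d(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    r0 = d(pos, anchors[0])
    err = 0.0
    for i, t in enumerate(tdoa, start=1):
        err += (d(pos, anchors[i]) - r0 - c * t) ** 2
    return err

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
# Noise-free measurements generated from the true position.
tdoa = [(math.hypot(7, 4) - 5) / 340.0, (math.hypot(3, 6) - 5) / 340.0]
print(tdoa_fitness(true_pos, anchors, tdoa))  # ~0 at the true position
```

    An HHO-style optimizer would evaluate this fitness for each hawk's candidate position; seeding the initial population with an algebraic initial solution, as the abstract describes, starts the search near this minimum.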
  • SHI Zhao, SUN Changyin, JIANG Fan
    Computer Engineering. 2020, 46(12): 185-192. https://doi.org/10.19678/j.issn.1000-3428.0056421
    Millimeter-Wave (mm-Wave) communication is expected to provide significant capacity gains in ultra-dense network scenarios of the 5G wireless communication system. To address the complex interference in mm-Wave communication scenarios and the interruptions caused by the high blockage rate of the dynamic links of cell edge users, this paper proposes a power allocation scheme based on the Q-Learning algorithm that takes the high interruption rate of mm-Wave communication into account. The Poisson Cluster Process (PCP) is used to model randomly deployed base station and user systems, and the different influences of link blockage on useful signals and interference signals are analyzed. Then an egoistic and altruistic strategy is introduced into the design of the state and reward functions of the Q-Learning algorithm, and the machine learning strategy is used to obtain the optimal power allocation. Simulation results show that, compared with the CDP-Q scheme, which does not consider the link blockage rate, the proposed algorithm significantly improves the total capacity of the system by allocating power optimally based on the dynamic status of links.
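The Q-Learning power allocation idea can be sketched with a tabular toy model: the agent picks one of several transmit power levels, and the reward mixes its own rate gain (egoistic term) with a penalty for interference caused to neighbours (altruistic term). The power levels, reward shape and hyperparameters below are our own illustrative choices, not the paper's.

```python
import random

random.seed(0)
POWERS = [0.1, 0.5, 1.0]          # candidate power levels (hypothetical)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def reward(p):
    own_rate = p / (1 + p)         # egoistic term: own achievable rate
    interference = 0.5 * p         # altruistic term: harm to neighbours
    return own_rate - interference

q = [0.0] * len(POWERS)            # single-state Q-table, for brevity
for _ in range(5000):
    # epsilon-greedy action selection over power levels
    if random.random() < EPS:
        a = random.randrange(len(POWERS))
    else:
        a = max(range(len(POWERS)), key=q.__getitem__)
    r = reward(POWERS[a])
    # standard Q-learning update
    q[a] += ALPHA * (r + GAMMA * max(q) - q[a])

best = POWERS[max(range(len(POWERS)), key=q.__getitem__)]
```

Under this toy reward, the middle power level balances the egoistic and altruistic terms best, and the Q-table converges to it; the paper applies the same mechanism to a full mm-Wave system model with link-blockage-dependent states.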
  • LIU Chunling, LIU Minti, DING Yuanming
    Computer Engineering. 2020, 46(12): 193-200. https://doi.org/10.19678/j.issn.1000-3428.0056784
    To strengthen the weak ability of Unmanned Aerial Vehicle (UAV) cluster networks to resist intelligent jamming in complex communication environments, this paper proposes a multi-domain cognitive anti-jamming algorithm based on intelligent decision-making theory. Built on the advantage actor-critic algorithm, the algorithm regards the UAV as an agent and determines the jamming channel from the perceived environmental spectrum state. Based on Stackelberg game theory, the interference signals of medium-level jamming channels are suppressed in the power domain to reduce the time cost of channel switching. Cluster head assistance is introduced to solve the low success rate of channel decision-making caused by the weak local spectrum sensing ability of a single agent. Simulation results show that compared with the QL-AJ and AC-AJ algorithms, the proposed algorithm can give the optimal number of nodes in a cluster, improve the Signal to Interference-plus-Noise Ratio (SINR) of received signals, and provide better overall anti-jamming performance for networks.
  • LING Feng, DUAN Jianlan, LI Chuanwei, JIANG Jianchun
    Computer Engineering. 2020, 46(12): 201-206,221. https://doi.org/10.19678/j.issn.1000-3428.0056602
    To mitigate load imbalance across edge servers and improve resource utilization in scenarios where Cellular Vehicle to Everything (C-V2X) is integrated with Mobile Edge Computing (MEC), this paper proposes a dynamic load balancing algorithm. The algorithm monitors the real-time running status of the edge servers and dynamically adjusts the weight of each load index, so that the actual load status of edge servers can be accurately evaluated. The algorithm then considers the threshold, mean value and standard deviation of the loads of the edge server cluster to perform reasonable task allocation. Experimental results show that compared with the traditional random polling algorithm and the minimum traffic balancing algorithm, the proposed algorithm better mitigates load imbalance in the edge server cluster and reduces task completion time.
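The load evaluation step can be sketched as follows: each server reports several load indices, the per-index weights are normalised into a single score, and a cluster-wide mean/standard-deviation rule flags overloaded servers. The metric names, weights and threshold rule are our own toy choices, not the paper's exact formulation (which adjusts the weights dynamically from live readings).

```python
import statistics

def load_scores(servers, weights):
    """Weighted load score per server from several load indices."""
    total = sum(weights.values())
    norm = {k: v / total for k, v in weights.items()}  # normalise weights
    return {name: sum(norm[k] * metrics[k] for k in norm)
            for name, metrics in servers.items()}

def overloaded(scores, k=1.0):
    """Servers whose score exceeds cluster mean + k * stddev."""
    mean = statistics.mean(scores.values())
    stdev = statistics.pstdev(scores.values())
    return [s for s, v in scores.items() if v > mean + k * stdev]

# three hypothetical edge servers reporting CPU, memory and bandwidth usage
servers = {
    "edge1": {"cpu": 0.9, "mem": 0.8, "bw": 0.7},
    "edge2": {"cpu": 0.3, "mem": 0.4, "bw": 0.2},
    "edge3": {"cpu": 0.4, "mem": 0.3, "bw": 0.3},
}
scores = load_scores(servers, {"cpu": 2.0, "mem": 1.0, "bw": 1.0})
busy = load_scores and overloaded(scores)
```

Tasks destined for a flagged server would then be redirected to lighter-loaded members of the cluster.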
  • Graphics and Image Processing
  • JIA Ruiming, LI Yang, LI Tong, CUI Jiali, WANG Yiding
    Computer Engineering. 2020, 46(12): 207-214. https://doi.org/10.19678/j.issn.1000-3428.0056477
    Monocular image depth estimation based on Convolutional Neural Networks (CNN) suffers from inaccurate depth information, blurred edges and missing details. Therefore, this paper proposes a deep convolutional network with a multi-level feature fusion structure. The network adopts an end-to-end encoder-decoder structure. The encoder uses the ResNet101 network structure to convert the image into a high-dimensional feature map. The decoder uses an up-sampling convolution module to reconstruct a depth image from the high-dimensional feature map, and fuses the features of different levels in the encoder and decoder. Experimental results on the NYUv2 and KITTI datasets show that compared with other advanced networks, the proposed network not only predicts more accurate depth information, but also preserves the edge information of the predicted depth image.
  • ZHENG Shanshan, LIU Wen, SHAN Rui, ZHAO Jingyi, JIANG Guoqian, ZHANG Zhi
    Computer Engineering. 2020, 46(12): 215-221. https://doi.org/10.19678/j.issn.1000-3428.0056791
    To solve the low classification accuracy caused by small numbers of training samples and high spectral dimensionality, this paper proposes a Hyperspectral Image (HSI) classification method based on improved multi-scale Three-Dimensional (3D) residual Convolutional Neural Networks (CNN). An appropriate convolution stride is selected to reduce the dimensionality of the first spectral layer of the network and extract shallow features. The maximum pooling layer in the 3D convolution filter bank is used to reduce the training parameters of the whole network. The multi-scale filter bank and the 3D residual unit are improved to extract the deep local spatial-spectral joint features of the image, which are input into the Softmax layer to predict the class labels of samples. Experimental results show that the overall classification accuracy of this method is 99.33% and 99.83% on the Indian Pines and Pavia University hyperspectral datasets, respectively. Compared with the SVM and SAE methods, the proposed method extracts more discriminative classification features and achieves higher image classification accuracy.
  • YU Haiwen, YI Xinwei, XU Shaoping, LIN Zhenyu
    Computer Engineering. 2020, 46(12): 222-230,237. https://doi.org/10.19678/j.issn.1000-3428.0058327
    To improve the denoising performance of the Fast and Flexible Denoising Convolutional Neural Network (FFDNet), this paper proposes a Noise Level Estimation (NLE) model that estimates the noise level. The estimation result is input into the FFDNet model, and the NLE model serves as the preceding module of the FFDNet deep denoising model, turning it into a blind denoising model. A shallow convolutional neural network model is used to separate noise signals from noisy images to obtain a noise map, whose standard deviation is taken as the initial estimate of the noise level. Considering the strong correlation between this initial estimate and the ground-truth noise level, a Back-Propagation (BP) neural network model is used to correct the initial estimate. Experimental results show that when the proposed NLE model works with the FFDNet model, its denoising performance is close to that of the FFDNet denoising model using the ground-truth noise level. For most noise levels, the difference in Peak Signal to Noise Ratio (PSNR) between the two models is within 0.1 dB, which means the estimates of the proposed NLE model are close to the ground truth, bringing the fast and flexible characteristics of the FFDNet model into full play.
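The "standard deviation of a separated noise map as the initial noise-level estimate" idea can be illustrated with a far cruder separator than the paper's shallow CNN: on a flat synthetic image, differencing neighbouring pixels removes the signal and leaves (scaled) noise, whose standard deviation recovers the noise level. The flat image and Gaussian noise model are our own toy assumptions.

```python
import math
import random

random.seed(1)
SIGMA = 25.0                                   # true noise level (hypothetical)
clean = [128.0] * 10000                        # flat synthetic image row
noisy = [p + random.gauss(0, SIGMA) for p in clean]

# Differencing adjacent pixels cancels the (constant) signal; the difference
# of two independent N(0, sigma) samples has standard deviation sigma*sqrt(2),
# so we divide the variance by 2 to undo that scaling.
diffs = [noisy[i + 1] - noisy[i] for i in range(len(noisy) - 1)]
mean = sum(diffs) / len(diffs)
var = sum((d - mean) ** 2 for d in diffs) / len(diffs)
sigma_hat = math.sqrt(var / 2)                 # initial noise-level estimate
```

In the paper this initial estimate is then corrected by a BP network before being fed to FFDNet; the sketch only shows the std-of-noise-map step.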
  • LI Xiangjun, ZHOU Yong, LIU Tao, LIU Bocheng, LUO Ming
    Computer Engineering. 2020, 46(12): 231-237. https://doi.org/10.19678/j.issn.1000-3428.0056296
    To realize accurate contour matching and part identification in object identification, this paper proposes an improved accurate contour matching and analysis method based on minimum point-pair cost. The method uses interactive segmentation to learn contour analysis parameters and prototypes of different classes, and builds a knowledge base of class contour prototypes. Based on this prototype knowledge base, two matching strategies are introduced for contour matching: coarse-to-fine secondary matching and accurate matching with minimum point-pair cost. The former effectively reduces the sensitivity of the matching process to changes in contour details. The latter ensures that matching is translation-invariant, rotation-invariant, mirror-invariant and scale-invariant, and presents the matching results intuitively. Experimental results on the Animal dataset show that the method achieves high accuracy in object identification, including part segmentation, contour identification and part identification, and can accurately identify the category of a contour and its parts at the same time.
  • HU Xinrong, LIU Jiawen, LIU Junping, PENG Tao, HE Ruhan, HE Kai
    Computer Engineering. 2020, 46(12): 238-246,253. https://doi.org/10.19678/j.issn.1000-3428.0056751
    Traditional non-contact body measurement methods extract key feature points directly based on the proportions of body parts, which tends to cause large errors because of their strict requirements on body shapes and dress. Therefore, this paper proposes an algorithm for multi-feature point extraction and measurement of dressed human bodies, Human pesm-abss, which uses Adaptive Body Structure Segmentation (ABSS). The algorithm analyzes the differences in shape between Eastern and Western human bodies, and uses ABSS to segment the critical parts of the human body structure. To extract the feature points of the neck and shoulder, this paper gives the maximum distance method and the local maximum curvature method to solve the poor adaptability and low robustness of traditional algorithms. The experimental data of 210 samples with large standard deviation are compared with the real size information. The results show that compared with the Unclosed Snake model and the Simple-FCN-ASM model, the Human pesm-abss algorithm reduces the average error by 2.2 cm and 0.26 cm respectively, and reduces the time consumption by 1.098 s and 3.552 s. The algorithm has better real-time performance and robustness, and can be applied to online in-batch measurement of dressed human bodies.
  • QI Yongfeng, MA Zhongyu
    Computer Engineering. 2020, 46(12): 247-253. https://doi.org/10.19678/j.issn.1000-3428.0056176
    To improve the accuracy of head posture estimation in real scenes, this paper proposes a head posture estimation method using a deep residual network. The deep residual network ResNet101 is used as the backbone network, and an optimizer is introduced to improve the gradient stability of deep convolutional network training. The cross entropy loss is calculated by using the RGB image and the classifier, and the Euler angles representing the head posture are predicted in combination with the regression loss. Experimental results show that, compared with the FAN landmark detection method and the fine-grained method without key points, the proposed method has smaller mean absolute errors on the AFLW2000 and BIWI datasets, reaching 5.396 and 2.922 respectively. The accuracy of the method is over 95% on the 300W_LP dataset, and the method has good robustness in real scenes.
  • LIANG Wentao, KANG Yan, LI Hao, LI Jinyuan, NING Haoyu
    Computer Engineering. 2020, 46(12): 254-261,269. https://doi.org/10.19678/j.issn.1000-3428.0056209
    Compared with common image classification tasks, remote sensing image classification involves a wider feature range and a more complex distribution, which makes accurate classification difficult. In view of the adaptive relationship between the feature distribution of remote sensing images and the structure of neural networks, this paper proposes a remote sensing scene classification model using adaptive neural networks based on complexity-adaptive clustering. A complexity evaluation matrix of remote sensing images is constructed, which incorporates features such as color moment, gray level co-occurrence matrix, information entropy, information gain and line ratio. Image subsets with different degrees of complexity are obtained by calculating image similarity, and image complexity is divided into high, medium and low levels by hierarchical clustering. The complexity-adaptive image subsets are then trained with DenseNet, CapsNet and SENet to obtain the adaptive remote sensing scene classification model. Experimental results show that compared with DenseNet, CapsNet, SENet and other models, this model extracts image features of different complexity degrees more effectively and achieves higher accuracy in remote sensing scene classification.
  • LI Li, ZHANG Haoyang, QIAO Lu
    Computer Engineering. 2020, 46(12): 262-269. https://doi.org/10.19678/j.issn.1000-3428.0056338
    To improve the accuracy of benign and malignant identification of pulmonary nodules, this paper proposes an SFDG model for benign and malignant identification of pulmonary nodules that combines an improved Deep Convolutional Generative Adversarial Network (DCGAN) framework with semi-supervised Fuzzy C-Means (FCM) clustering. Firstly, lung nodule images with benign and malignant grade labels are input into the DCGAN framework, which enables the discriminator network, originally capable only of source classification, to classify lung nodules. Then, the semi-supervised FCM clustering method is added to the discrimination process, performing clustering analysis on the raw data set after the model extracts and quantifies the features of input lung nodule images. The network parameters are adjusted by comparing the output category probability and discrimination result of the current image with the actual result. Finally, the recognition accuracy of the model is improved by setting the maximum probability of the weighted loss function. Through training, a network model with a strong ability to identify benign and malignant pulmonary nodules is obtained. Experimental results show that the discriminator network of the improved model classifies benign and malignant pulmonary nodules well, with an accuracy of 90.96%.
  • Development Research and Engineering Application
  • WAN Pei, SANG Shengbo, ZHANG Chengran, ZHANG Bo
    Computer Engineering. 2020, 46(12): 270-275. https://doi.org/10.19678/j.issn.1000-3428.0056449
    Traditional Blood Pressure (BP) measurement models based on the pulse wave transit time method and the pulse wave characteristic parameter method suffer from low accuracy. This paper proposes a new continuous blood pressure estimation model, which automatically extracts the necessary features and their time-domain changes, and reliably estimates blood pressure in a non-invasive and continuous manner. The model consists of two layers. The lower layer uses an Artificial Neural Network (ANN) to extract necessary morphological features from Electrocardiogram (ECG) and Photoplethysmographic (PPG) waveforms. The higher layer uses a Long Short-Term Memory (LSTM) network layer to account for the time-domain changes of the features extracted by the lower layer. The proposed model is evaluated on 69 subjects under the standard of the Association for the Advancement of Medical Instrumentation (AAMI). Experimental results show that the proposed model has higher prediction accuracy than Deep-RNN and other BP estimation models based on ECG and PPG feature parameters.
  • LI Hao, HUO Wen, PEI Chunying, YUAN Yaoyao, KANG Yan
    Computer Engineering. 2020, 46(12): 276-282. https://doi.org/10.19678/j.issn.1000-3428.0056093
    To improve the efficiency of taxi market management and operation and maximize taxi revenue, this paper proposes a multi-region taxi order forecasting model based on the VGG network and Fully Convolutional Networks (FCN) using map rasterization. The taxi trajectory data is converted into order images, and the fully connected layers of the VGG network are removed while only the main structure is retained to reduce the number of model parameters. The deep convolutions in the network extract taxi driving characteristics in different spatial regions, and the taxi order images of the next time period are reconstructed by up-sampling in the deconvolution layer. The forecast of taxi orders for different regions and periods is thus obtained and presented as order images on the map. Experimental results show that compared with BP, RBF and other forecasting models, the prediction results of the proposed model have higher average accuracy and a lower Root Mean Square Error (RMSE), and the model can quickly predict the distribution of taxi orders across regions.
  • LIU Yicheng, LIAO Luchuan, ZHANG Jing, WU Hao, HE Ling, DAI Hongning, ZHANG Han, YANG Gang
    Computer Engineering. 2020, 46(12): 283-289,298. https://doi.org/10.19678/j.issn.1000-3428.0056288
    Many factors, such as interference and the small fuselage of the Unmanned Aerial Vehicle (UAV), pose challenges to high-precision detection of UAVs in visible image sequences. Therefore, this paper proposes a new UAV detection method. The shape change of the flying object is captured by a turntable camera, and the trajectory of the small moving target is obtained by using a trajectory clustering algorithm. The trajectory characteristics and morphological characteristics of the target are extracted and fused, and on this basis the target is identified by an Artificial Neural Network (ANN). At the same time, a small-range search algorithm is used to track the target, and jamming radio is used to suppress the UAV. Experimental results show that the method achieves a UAV and bird detection accuracy of 99.53%, and provides real-time detection, recognition and tracking of these targets.
  • XIE Kun, RONG Yutian, HU Fengping, CHEN Huan, YAO Xiaolong
    Computer Engineering. 2020, 46(12): 290-298. https://doi.org/10.19678/j.issn.1000-3428.0055891
    The historical data used for sales forecasting is sparse and volatile, so traditional statistical or machine learning prediction algorithms perform poorly when the prediction cycle is long. Therefore, based on the ensemble idea of Random Forest (RF) and the random partition and recombination of the training data set, this paper proposes an RF algorithm based on data integration. The algorithm reconstructs the original one-dimensional prediction variable into high-dimensional variables by random recombination, and takes the summed outputs as the final prediction value. Experimental results show that compared with traditional algorithms including ARIMA, RF and GBDT, the prediction performance of this algorithm on the actual data set is significantly improved. Extended experiments also show that data integration can be applied to the ARIMA algorithm, improving its prediction accuracy by about 3%.
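The data-integration idea can be sketched as follows: the one-dimensional series is re-cut into lagged high-dimensional samples, several weak learners are fitted on random recombinations of those samples, and their outputs are combined into the final forecast. The weak learner here is a trivial nearest-neighbour regressor chosen for brevity (our own stand-in; the paper uses random-forest trees), and the periodic toy series is hypothetical.

```python
import random

random.seed(42)

def make_windows(series, lag):
    """Reconstruct a 1-D series into (lag-window, next-value) samples."""
    return [(series[i:i + lag], series[i + lag])
            for i in range(len(series) - lag)]

def nn_predict(train, x):
    """1-nearest-neighbour regression on Euclidean distance between windows."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(train, key=lambda s: dist(s[0], x))[1]

series = [float(10 + (i % 4)) for i in range(40)]   # periodic toy sales data
samples = make_windows(series, lag=4)

preds = []
for _ in range(5):                         # 5 learners on random resamples
    boot = [random.choice(samples) for _ in samples]
    preds.append(nn_predict(boot, series[-4:]))
forecast = sum(preds) / len(preds)         # combine the ensemble outputs
```

Because the toy series repeats with period 4, the ensemble forecast lands near the true next value of 10; on real sparse and volatile data the gain comes from averaging over many random recombinations.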
  • LIU Jie, WANG Zheng, WANG Hui
    Computer Engineering. 2020, 46(12): 299-304,312. https://doi.org/10.19678/j.issn.1000-3428.0056577
    The application of Mutual Information (MI) and the Naive Bayes (NB) algorithm to spam filtering suffers from feature redundancy and an invalid independence assumption. To address these problems, this paper proposes an Improved Mutual Information-Weighted Naive Bayes (IMI-WNB) algorithm. To address the low efficiency of mutual information, an improved feature selection algorithm based on MI is proposed, which introduces a word frequency factor and an inter-class difference factor to achieve more efficient feature dimensionality reduction. To mitigate the independence assumption of the NB classification algorithm, the Improved Mutual Information (IMI) value is used for feature weighting in NB classification, which eliminates part of the adverse effect of the NB conditional independence assumption on mail classification. Experimental results show that compared with the traditional NB algorithm, the proposed algorithm improves the accuracy, recall rate and stability of spam filtering.
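The MI-with-extra-factors idea can be illustrated on a tiny corpus: plain MI between term presence and class is scaled by a word frequency factor and an inter-class difference factor, so that terms concentrated in one class and used often get larger weights. The exact factor definitions below are our own simplified stand-ins for the paper's IMI formula, and the corpus is hypothetical.

```python
import math

# label 1 = spam, 0 = ham (toy corpus)
docs = [("buy cheap pills now", 1), ("cheap offer buy now", 1),
        ("meeting now at noon", 0), ("project meeting notes", 0)]

def improved_mi(term):
    n = len(docs)
    n_t = sum(term in d.split() for d, _ in docs)          # docs containing term
    mi = 0.0
    for c in (0, 1):
        n_c = sum(lbl == c for _, lbl in docs)
        n_tc = sum(term in d.split() and lbl == c for d, lbl in docs)
        if n_tc:
            mi += (n_tc / n) * math.log((n_tc * n) / (n_t * n_c))
    tf = sum(d.split().count(term) for d, _ in docs) / n   # word frequency factor
    spam_f = sum(d.split().count(term) for d, l in docs if l == 1)
    ham_f = sum(d.split().count(term) for d, l in docs if l == 0)
    # inter-class difference factor: how lopsided the term's usage is
    diff = abs(spam_f - ham_f) / max(spam_f + ham_f, 1)
    return mi * (1 + tf) * (1 + diff)

w_cheap = improved_mi("cheap")    # spam-only term
w_now = improved_mi("now")        # term appearing in both classes
```

"cheap" occurs only in spam and so receives a much larger weight than "now", which spreads across both classes; in the full algorithm these weights then scale the per-feature terms of the NB classifier.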
  • SHI Yuanhao, ZHANG Jianming, XU Zhengyi, TENG Guowei
    Computer Engineering. 2020, 46(12): 305-312. https://doi.org/10.19678/j.issn.1000-3428.0056883
    Zero velocity correction is used to eliminate the cumulative error in the multi-mode Pedestrian Dead Reckoning (PDR) algorithm, but zero velocity intervals are difficult to detect reliably across different motion modes. To solve this problem, this paper proposes an adaptive zero velocity detection algorithm based on the Long Short-Term Memory (LSTM) network. An LSTM network is constructed to extract the triaxial acceleration, triaxial angular velocity and time sequence characteristics of zero velocity intervals under different motion modes, and adaptive zero velocity detection is realized by optimizing the zero velocity detection algorithm. On this basis, the Differential Evolution (DE) algorithm is used to fuse positioning methods such as Bluetooth to correct the accumulated errors. Experimental results show that compared with the traditional Kalman filter algorithm, the proposed algorithm reduces the absolute positioning error from 6.43 m to 0.94 m, and the relative error from 1.246% to 0.182%.
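For context, the classical fixed-threshold zero-velocity detector that such adaptive methods improve upon can be sketched in a few lines: a foot is declared stationary when the accelerometer magnitude is close to gravity and the gyroscope magnitude is near zero. The threshold values here are illustrative only; the point of the LSTM approach in the abstract is precisely that fixed thresholds fail across different motion modes.

```python
G = 9.81  # gravitational acceleration, m/s^2

def zero_velocity(accel, gyro, acc_tol=0.3, gyro_tol=0.1):
    """Fixed-threshold detector on one IMU sample (accel in m/s^2, gyro in rad/s)."""
    acc_mag = (accel[0] ** 2 + accel[1] ** 2 + accel[2] ** 2) ** 0.5
    gyro_mag = (gyro[0] ** 2 + gyro[1] ** 2 + gyro[2] ** 2) ** 0.5
    # stationary iff acceleration is ~gravity and rotation rate is ~zero
    return abs(acc_mag - G) < acc_tol and gyro_mag < gyro_tol

still = zero_velocity((0.0, 0.0, 9.8), (0.01, 0.0, 0.02))    # foot at rest
moving = zero_velocity((1.5, 0.2, 9.0), (0.8, 0.1, 0.3))     # mid-stride
```

An LSTM-based detector replaces these hand-tuned thresholds with a sequence model over the same triaxial inputs, which is what lets it adapt to walking, running and other modes.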
  • WANG Zhihua, LIU Pingzeng, SONG Chengbao, SONG Yunsheng, ZHANG Chao, ZHANG Yan, WU Xiaotong, ZHAO Kun
    Computer Engineering. 2020, 46(12): 313-320. https://doi.org/10.19678/j.issn.1000-3428.0056262
    The data of traditional traceability systems for agricultural products is stored in a centralized way and the traceability process is fixed, which leads to unreliable traceability results and poor system flexibility. To solve these problems, this paper proposes a flexible and reliable traceability solution for agricultural products using blockchain technology. The system mode of "one hyperledger for each step" is established to reduce the complexity of the storage structure and achieve reliable traceability. A dynamic traceability mechanism enables the system to flexibly adapt to different production scenarios. Hyperledger is used as the blockchain implementation, and the key traceability data is encrypted and stored in a distributed way to improve the reliability of the traceability results. Ginger products are taken as the traceability object, and the correlations between upstream and downstream products of the industrial chain are analyzed to determine the granularity of the traceability object, the content of the hyperledger and the data format, so as to verify the flexible and reliable blockchain-based traceability system model. Analysis and application results show that the solution effectively improves the security of traceability information and the credibility of traceability results, adapts to the production needs of different scenarios, and greatly improves system flexibility compared with traditional agricultural product traceability systems.