
15 September 2020, Volume 46 Issue 9
    

  • Hot Topics and Reviews
  • SU Jiongming, LIU Hongfu, XIANG Fengtao, WU Jianzhai, YUAN Xingsheng
    Computer Engineering. 2020, 46(9): 1-15. https://doi.org/10.19678/j.issn.1000-3428.0057951
    Deep Neural Networks (DNN) are characterized by non-linear, non-convex properties, multiple hidden layers, feature vectorization, and massive model parameters. However, their weak interpretability has hindered their theoretical development and practical applications, so interpretation methods for DNN have attracted the attention of artificial intelligence researchers. In view of the strong requirements for the interpretability of DNN in high-risk decision-making fields such as military, finance, medicine, and transportation, this paper comprehensively reviews and analyses interpretation methods for typical networks such as Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and Generative Adversarial Networks (GAN). It also summarizes and compares existing interpretation methods. Then, based on the current development trend of DNN, future research directions for interpretation methods are discussed.
  • SHANG Diya, SUN Hua, HONG Zhenhou, ZENG Qingliang
    Computer Engineering. 2020, 46(9): 16-26. https://doi.org/10.19678/j.issn.1000-3428.0057520
    Automated deep learning is one of the new research hotspots in the field of deep learning. Neural architecture search algorithms are frequently used to implement automated deep learning, as they can automatically design neural network structures by defining different search spaces, search strategies or optimization strategies. This paper introduces the development history of evolutionary algorithms and evolutionary neural networks. It then introduces the different methods and processes that use evolutionary algorithms as the search strategy for neural architecture search, and compares the features and development status of these neural architecture search algorithms. On this basis, this paper discusses the search space, search strategy and future development directions of neural architecture search algorithms.
  • XING Hu, CHEN Rong, TANG Wenjun
    Computer Engineering. 2020, 46(9): 27-34,43. https://doi.org/10.19678/j.issn.1000-3428.0057768
    Based on the diversity of Spatial Crowdsourcing (SC) tasks, this paper constructs a task allocation model for SC and proposes an online task allocation strategy based on a prediction algorithm. In the batch processing mode, the problem of Maximum Score Assignment (MSA) is transformed into one of maximum-weight bipartite graph matching. The Hungarian algorithm is used to solve the problem to obtain the maximum score of each time interval, and the prediction algorithm is used to keep workers that have completed a task in task-intensive regions as far as possible, so that the possibility of workers finding no suitable task to execute is reduced and the optimal online task allocation of the model is implemented. Experimental results on a real dataset provided by Didi show that compared with the BASIC, LLEP and CDP strategies, the proposed strategy can increase the total number of assigned tasks over the whole time interval by up to 10%, with higher task allocation efficiency and quality.
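    The batch-mode MSA step described above reduces to maximum-weight bipartite matching, which off-the-shelf Hungarian-algorithm solvers handle directly. A minimal sketch using SciPy (the score matrix and its dimensions are illustrative placeholders, not values from the paper):

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Illustrative worker-task score matrix: rows = workers, cols = tasks.
    scores = np.array([
        [4.0, 1.0, 3.0],
        [2.0, 0.0, 5.0],
        [3.0, 2.0, 2.0],
    ])

    # linear_sum_assignment minimizes cost, so negate to maximize total score.
    workers, tasks = linear_sum_assignment(-scores)
    print(list(zip(workers, tasks)))      # one task per worker
    print(scores[workers, tasks].sum())   # maximum total score for this interval
    ```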
  • LIU Xu, ZHANG Xihuang, LIU Zhao, LÜ Xiaojing, ZHU Guanghui
    Computer Engineering. 2020, 46(9): 35-43. https://doi.org/10.19678/j.issn.1000-3428.0056967
    Cosmological simulations are essential for scientists to study the formation of non-linear structures and hypotheses about dark matter, dark energy, etc. High-precision cosmological simulations include hundreds of billions or even trillions of particles, demanding massive computational power, so supercomputers provide an ideal platform for cosmological simulation. To implement cosmological N-body simulation on Sunway TaihuLight, a supercomputer developed in China, this paper analyzes the Particle Mesh (PM) and Fast Multipole Method (FMM) in PHoToNs. Combining the analysis results with the multi-core processor architecture, this paper proposes multiple performance optimization techniques, including a multi-level decomposition and load balancing scheme, a pipeline strategy for tree traversal and gravity calculation, and a vectorized gravity calculation algorithm. Using the above techniques, an N-body simulation software, SwPHoToNs, is implemented, which gives full play to the architectural advantages of Sunway TaihuLight. Experimental results show that when conducting cosmological simulations containing up to 640 billion particles on 5 200 000 cores of Sunway TaihuLight, SwPHoToNs achieves a sustained calculation speed of 29.44 PFLOPS with a parallel efficiency of 84.6% and a computational efficiency of 48.3%.
  • LIU Yaxue, YANG Xiaobao, LIU Yuan, XI Xiaoqiang
    Computer Engineering. 2020, 46(9): 44-53. https://doi.org/10.19678/j.issn.1000-3428.0056795
    Realizing cross-industry and cross-platform resource integration is a new trend of social development, and integrated multi-application certificate management systems should be able to provide identity authentication for multiple industries. However, the single point of failure of traditional centralized Public Key Infrastructure (PKI) authentication systems poses a systematic threat to industries and users. To address the security authentication problem of multiple industries, this paper uses decentralized and tamper-resistant blockchain technology to construct a multi-application certificate system model, BMCS. The model establishes a cross-industry distributed trust structure on the blockchain and deploys multiple smart contracts on the BMCS blockchain network authorized by multiple industries, so as to manage certificate operations in each industry. Also, a multi-application file system is used to store multi-industry certificates on terminal devices. Experimental results show that BMCS achieves life-cycle management of multi-industry certificates and avoids the single point of failure of traditional authentication systems. It ensures systematic security for the identity authentication of terminal devices in multiple industries, reduces the cost and improves the efficiency of certificate services.
  • Artificial Intelligence and Pattern Recognition
  • ZHANG Yi, ZHOU Wen, LIANG Yiwen, TAN Chengyu
    Computer Engineering. 2020, 46(9): 54-60. https://doi.org/10.19678/j.issn.1000-3428.0055380
    The Dendritic Cell Algorithm (DCA) simulates antigen presentation in the human immune system and can divide input data into normal and abnormal data quickly and effectively. However, existing DCA models generally lack a clear formal description, and their signal extraction depends on artificial experience. To address these problems, this paper proposes a numerical differentiation-based functional dendritic cell model, named ndhDCA, by improving the hDCA model. In the preprocessing stage, the numerical differentiation method is introduced to extract the signal adaptively according to the trend of data change and to randomly and dynamically sample the antigen, removing the sensitivity to data order. On this basis, the input signals are fused to obtain the decision signal, and the antigen background environment is classified. ndhDCA, DCA and hDCA are compared on the WBC and KDD99 data sets. The experimental results show that ndhDCA has higher accuracy and a lower false positive rate on both ordered and unordered data sets, and overcomes the sensitivity to data order.
  • ZHANG Guoling, WANG Xiaodan, LI Rui, LAI Jie, XIANG Qian
    Computer Engineering. 2020, 46(9): 61-67. https://doi.org/10.19678/j.issn.1000-3428.0057060
    Extreme Learning Machine (ELM) randomly selects the input weights and hidden-layer bias of the network, which increases the complexity and reduces the robustness of the network. To address this problem, this paper proposes an ELM algorithm based on a stacked Denoising Sparse Auto-Encoder (sDSAE-ELM). Taking advantage of the sparse network of the stacked Denoising Sparse Auto-Encoder (sDSAE), the deep features of the target data are mined, and the input weights and hidden-layer bias are generated for ELM to obtain the hidden-layer output weights, completing the training of the classifier. Then sparsity constraints are added to optimize the network structure and improve the classification accuracy of the algorithm. Experimental results show that the proposed algorithm has higher classification accuracy and stronger robustness than the ELM, PCA-ELM, ELM-AE and DAE-ELM algorithms in processing high-dimensional noisy data.
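    For readers unfamiliar with ELM, its core training step is a single pseudo-inverse solve over a randomly projected hidden layer. A minimal NumPy sketch of that step, assuming random placeholder data and a sigmoid activation (the paper's sDSAE front end, which replaces the random projection, is not shown):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 30))            # 200 samples, 30 features (placeholder data)
    Y = np.eye(3)[rng.integers(0, 3, 200)]    # one-hot labels for 3 classes

    n_hidden = 64
    W = rng.normal(size=(30, n_hidden))       # random input weights (not trained)
    b = rng.normal(size=n_hidden)             # random hidden-layer bias

    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))    # hidden-layer output (sigmoid)
    beta = np.linalg.pinv(H) @ Y              # output weights via Moore-Penrose pseudo-inverse

    pred = np.argmax(H @ beta, axis=1)        # class predictions on the training set
    ```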
  • WEI Wenhao, TANG Zekun, LIU Gang
    Computer Engineering. 2020, 46(9): 68-75. https://doi.org/10.19678/j.issn.1000-3428.0055574
    The randomness of initial center point selection in the K-means algorithm and its sensitivity to noise points make the clustering result fall easily into a local optimum. In order to obtain the best initial clustering centers, this paper proposes a parallel bisecting K-means algorithm based on distance and density. The algorithm calculates the average distance between dataset samples. Based on the distances between data points, the weight of each data point is calculated, and the most heavily weighted point is chosen as the first center. Points whose distance from the first center is less than the average sample distance do not participate in the next round of selection. The weights of the remaining data points are multiplied by their distance from the selected center, and the point with the largest value is chosen as the next center. After the two centers are obtained, the data are assigned to them by distance, dividing the class represented by each center into two, and the above steps are repeated on each class. The algorithm thus segments the data in the manner of cell division and constructs a full binary tree. When the number of leaf nodes exceeds the number of clusters, k, the splitting is stopped, and k initial clustering centers are obtained by merging leaf nodes to execute the K-means algorithm. Test results on UCI public datasets show that the proposed algorithm has higher efficiency and better clustering performance than the traditional K-means algorithm, Canopy-Kmeans algorithm, Bisecting K-means algorithm, WK-means algorithm, MWK-means algorithm and DCK-means algorithm.
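    A rough sketch of the center-selection idea, assuming (as one plausible reading of the abstract, not the paper's exact definition) that a point's weight is the number of points within the average pairwise distance of it:

    ```python
    import numpy as np

    def pick_two_centers(X):
        """Pick two initial centers by distance/density weighting (illustrative sketch)."""
        d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # pairwise distances
        avg = d.sum() / (len(X) * (len(X) - 1))                # average sample distance
        weight = (d < avg).sum(axis=1)                         # density-style weight (assumed form)
        c1 = int(weight.argmax())                              # heaviest point -> first center
        far = d[c1] >= avg                                     # exclude points too close to c1
        score = np.where(far, weight * d[c1], -np.inf)         # weight x distance from c1
        c2 = int(score.argmax())                               # second center
        return c1, c2
    ```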
  • CHEN Junyue, HAO Wenning, ZHANG Zixuan, TANG Xinde, KANG Ruizhi, MO Fei
    Computer Engineering. 2020, 46(9): 76-82. https://doi.org/10.19678/j.issn.1000-3428.0055313
    Existing sentence similarity algorithms fail to handle synonyms and suffer from low accuracy and high complexity. To address these problems, this paper proposes a new sentence similarity algorithm for paraphrase identification that uses word embeddings to improve the Levenshtein similarity algorithm and the Jaccard index. The advantages and disadvantages of existing sentence similarity algorithms are briefly analyzed, and an application mode combining multiple similarity features is designed. Experimental results on the MRPC paraphrase identification data set show that the accuracy rate and F1 value of the paraphrase identification model using this algorithm are 74.4% and 83.1% respectively. Compared with models using the TF-IDF algorithm, Bag of Words (BoW) algorithm and other traditional algorithms, it achieves better recognition performance.
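    As background, the two base similarities being improved can be computed in a few lines. A plain-Python sketch over token lists (the paper's embedding-based synonym handling is not reproduced here):

    ```python
    def levenshtein_similarity(a, b):
        """1 - normalized edit distance between token sequences a and b."""
        m, n = len(a), len(b)
        prev = list(range(n + 1))
        for i in range(1, m + 1):
            cur = [i] + [0] * n
            for j in range(1, n + 1):
                cur[j] = min(prev[j] + 1, cur[j - 1] + 1,
                             prev[j - 1] + (a[i - 1] != b[j - 1]))
            prev = cur
        return 1.0 - prev[n] / max(m, n, 1)

    def jaccard_index(a, b):
        """|intersection| / |union| of the token sets."""
        sa, sb = set(a), set(b)
        return len(sa & sb) / max(len(sa | sb), 1)

    s1 = "the cat sat on the mat".split()
    s2 = "a cat was sitting on the mat".split()
    print(levenshtein_similarity(s1, s2), jaccard_index(s1, s2))
    ```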
  • ZHUANG Zujiang, FANG Yu, LEI Jianchao, LIU Dongbo, WANG Haibin
    Computer Engineering. 2020, 46(9): 83-88,94. https://doi.org/10.19678/j.issn.1000-3428.0055311
    The way human beings process information relies mainly on billions of neurons that constitute a complex neural network, in which information is transmitted in the form of pulses. In this paper, the STDP learning algorithm is used to construct a two-layer Spiking Neural Network (SNN) based on the LIF neuron model, and a voting competition mechanism is proposed based on an improved classification layer algorithm. Through competitive voting on the categories represented by neurons over multiple rounds of training, the performance of a network with a fixed number of neurons on image classification is optimized. Experimental results on the MNIST data set show that the accuracy rate with the voting competition mechanism reaches 98.1%, about 6% higher on average than that of an SNN of the same network scale without the voting competition mechanism. When the number of neurons is small, the voting competition mechanism can achieve the same training results as more complex network structures without increasing the training time.
  • LI Jun, LÜ Xueqiang
    Computer Engineering. 2020, 46(9): 89-94. https://doi.org/10.19678/j.issn.1000-3428.0055368
    Based on the structural information of the document and the semantic information of external words, this paper proposes a keyword extraction method based on Bidirectional Encoder Representations from Transformers (BERT) word vectors and TextRank. Building on the network graph-based TextRank, the method introduces semantic differences between words and uses BERT word vector weighting to optimize the calculation of the TextRank transition probability matrix. The overall influence scores of words in the document are then sorted iteratively, and the TopN highest-scoring words are selected as keywords. Experimental results show that when the Top3, Top5, Top7 and Top10 words are selected as keywords, the average F value of the proposed method is 2.5% higher than that of the keyword extraction method based on word vector clustering centroids and TextRank weighting. The proposed method can improve the efficiency of keyword extraction.
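    The weighted-TextRank step amounts to power iteration over a transition matrix whose edge weights come from word-vector similarity. A minimal sketch, assuming absolute cosine similarity of randomly generated stand-in word vectors as the weighting (real BERT vectors and the paper's exact weighting scheme are not reproduced):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    vecs = rng.normal(size=(5, 8))                      # stand-in word vectors
    norm = np.linalg.norm(vecs, axis=1)
    W = np.abs(vecs @ vecs.T / np.outer(norm, norm))    # absolute cosine similarity as edge weight
    np.fill_diagonal(W, 0)

    P = W / W.sum(axis=1, keepdims=True)                # row-stochastic transition matrix
    d, score = 0.85, np.full(5, 1 / 5)
    for _ in range(100):                                # TextRank / PageRank iteration
        score = (1 - d) / 5 + d * (P.T @ score)

    top = np.argsort(-score)                            # candidate keyword ranking
    ```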
  • CAO Yukun, GUI Liai
    Computer Engineering. 2020, 46(9): 95-100,109. https://doi.org/10.19678/j.issn.1000-3428.0055152
    Applying existing Temporal Convolutional Networks (TCN) to temporal sequence prediction involves a large amount of computation and redundant parameters, making them unsuitable for mobile terminals with limited computing capabilities and storage space, such as mobile phones, tablets and laptops. To address this problem, this paper proposes a Lightweight TCN (L-TCN), which replaces the common convolution in TCN with depthwise separable convolution. The depthwise convolution separates the common convolution along the spatial dimension, broadening the network and extending the scope of feature extraction, and the pointwise convolution then simplifies the computation of the common convolution operation. Experimental results show that compared with TCN, the proposed L-TCN significantly reduces the number of parameters and the amount of computation of the network model while keeping the precision of temporal sequence prediction, demonstrating that it is applicable to mobile terminals with limited computing capabilities and storage space.
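    Depthwise separable convolution replaces one k-wide convolution with a per-channel (depthwise) convolution followed by a 1x1 (pointwise) convolution, cutting parameters roughly from C_in*C_out*k to C_in*k + C_in*C_out. A PyTorch sketch of the substitution (layer sizes are illustrative, not the paper's configuration, and the causal/dilated structure of a full TCN block is omitted):

    ```python
    import torch
    import torch.nn as nn

    c_in, c_out, k = 32, 64, 3

    common = nn.Conv1d(c_in, c_out, k, padding=1)          # standard convolution

    separable = nn.Sequential(
        nn.Conv1d(c_in, c_in, k, padding=1, groups=c_in),  # depthwise: one filter per channel
        nn.Conv1d(c_in, c_out, 1),                         # pointwise: 1x1 channel mixing
    )

    x = torch.randn(8, c_in, 100)                          # (batch, channels, time)
    assert common(x).shape == separable(x).shape

    count = lambda m: sum(p.numel() for p in m.parameters())
    print(count(common), count(separable))                 # separable is far smaller
    ```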
  • ZHANG Yi, ZHAO Jieyu, WANG Chong, ZHENG Ye
    Computer Engineering. 2020, 46(9): 101-109. https://doi.org/10.19678/j.issn.1000-3428.0056808
    In order to enhance the temporal feature extraction ability of Temporal Convolutional Networks (TCNs), this paper proposes a multimodal gesture recognition method based on 3D Dense convolutional Networks (3D-DenseNets) and improved TCNs. 3D-DenseNets are used in spatial analysis to effectively learn short-term spatio-temporal features, and TCNs are used to extract temporal features in temporal analysis. On this basis, the attention mechanism is introduced, and a temporal squeeze-and-excitation network is used to adjust the weight of each TCN layer's features in the time dimension. The method is evaluated on two dynamic gesture data sets, VIVA and NVGesture. Experimental results show that the proposed method achieves an accuracy rate of 91.54% on VIVA and 86.37% on the NVGesture benchmark, reaching a level similar to that of the latest MTUT method.
  • WANG Qingsong, ZHANG Heng, LI Fei
    Computer Engineering. 2020, 46(9): 110-116. https://doi.org/10.19678/j.issn.1000-3428.0055047
    Existing automatic summary generation methods for long texts cannot fully measure the similarity characteristics of sentences, which reduces the accuracy of summary generation. To address this problem, this paper proposes an automatic summary generation method based on a graph integration model. The method calculates the word frequency, semantic features and syntactic features of text sentences, and then uses the naive Bayesian method to transform the fusion of multidimensional text features into a graph integration model, which improves the accuracy of the similarity calculated between sentences. On this basis, a text summary is generated using the TextRank algorithm. Experimental results show that compared with the traditional summary generation method based on the sequence-to-sequence model and the summary extraction method based on multi-dimensional sentence features, the proposed method achieves higher ROUGE scores. It can effectively synthesize the multidimensional features of sentences and improves the accuracy of summary generation.
  • Cyberspace Security
  • HUANG Tinghui, DING Yong, LI Sijun
    Computer Engineering. 2020, 46(9): 117-122. https://doi.org/10.19678/j.issn.1000-3428.0055664
    With the rapid deployment of Internet of Things (IoT) devices on the open Internet, the privacy and communication security of IoT devices have attracted much attention. Because embedded IoT devices are limited in resources and computing performance, traditional network communication security methods cannot provide reliable security guarantees. Therefore, this paper proposes a lightweight IPv6 address hopping protocol, L6HOP, for 6LoWPAN low-power wireless personal area networks. The protocol improves the Moving Target IPv6 Defense (MT6D) protocol, using a lightweight hash algorithm to reduce CPU computing consumption and introducing a sliding address window to solve the high packet loss rate caused by clock errors between devices. Experimental results show that the L6HOP protocol can effectively protect the IoT from device tracking, DoS and eavesdropping attacks. Compared with the MT6D protocol, it significantly reduces CPU computing overhead and communication packet loss rate.
  • DENG Zhihui, WANG Shaohui, WANG Ping
    Computer Engineering. 2020, 46(9): 123-128,135. https://doi.org/10.19678/j.issn.1000-3428.0056028
    Searchable encryption, as the core technology of secure search, enables data storage servers to retrieve data under ciphertext. However, existing searchable encryption schemes without a secure channel fail to resist off-line keyword guessing attacks initiated by external attackers. To solve this problem, this paper analyzes the security of a searchable encryption scheme based on composite-order bilinear pairings and proves that the existing scheme does not consider the indistinguishability of keyword trapdoors. This paper then redesigns the Trapdoor algorithm and proposes an improved searchable public key encryption scheme without a secure channel, which is proven able to resist external keyword guessing attacks. Analysis results show that the proposed scheme has compact ciphertext and trapdoor sizes and computational complexity close to that of the original scheme, but with better security.
  • YANG Xiaodong, CHEN Guilan, LI Ting, LIU Rui, ZHAO Xiaobin
    Computer Engineering. 2020, 46(9): 129-135. https://doi.org/10.19678/j.issn.1000-3428.0056080
    Searchable encryption technology, which can protect the confidentiality and privacy of cloud data, has broad application prospects in cloud storage environments. However, existing searchable encryption schemes face problems such as excessive computational overhead, low security, and lack of support for multi-user ciphertext retrieval. To solve these problems, a multi-user ciphertext retrieval scheme based on a certificateless cryptosystem is proposed. The user's final private key consists of a partial private key and a secret value, which effectively solves the certificate management problem of traditional cryptosystems and the key escrow problem of identity-based cryptosystems. In addition, the data owner does not need to specify the identity of the accessing user when encrypting a keyword. The scheme supports ciphertext retrieval by multiple users and implements the joining and revocation of access users through an authorization list. Analysis results show that the scheme satisfies the indistinguishability of the ciphertext index and of trapdoors. Compared with similar schemes, it achieves higher computational performance in keyword encryption, trapdoor generation and keyword retrieval.
  • LIU Feifei, WU Zhongdong, DING Longbin, ZHANG Kai
    Computer Engineering. 2020, 46(9): 136-142,148. https://doi.org/10.19678/j.issn.1000-3428.0055752
    To address the security threats in the communication between computer networks and the Advanced Metering Infrastructure (AMI) of the smart grid, this paper proposes an improved intrusion detection algorithm for AMI based on DBN-OS-RKELM. The algorithm uses a Deep Belief Network (DBN) to extract the main features of collected historical network log data, representing the high-dimensional data in a low-dimensional form during feature learning to reduce redundant features. Newly arrived network log data are then fed into DBN-OS-RKELM to update the output weights in real time, completing the classification for AMI intrusion detection. Experimental results show that compared with intrusion detection algorithms based on the Extreme Learning Machine (ELM), Online Sequential Extreme Learning Machine (OS-ELM) and others, the proposed algorithm has better generalization ability and a faster learning rate, and improves the accuracy of intrusion detection.
  • ZHAO Liang, LI Lei, LI Xiangli
    Computer Engineering. 2020, 46(9): 143-148. https://doi.org/10.19678/j.issn.1000-3428.0055661
    To address the problem that Web application servers are vulnerable to replay attacks, this paper proposes a defense scheme for Web servers based on double sequence functions. A sequence function and a periodic function are used to generate the encryption verification parameters in the identity verification stage and the session stage respectively, and bidirectional authentication is carried out through sequence functions of the same structure defined on both sides. The parameters are updated by progressively advancing the sequence value, so as to filter replayed messages and ensure the reliability and freshness of each request. Analysis results show that the scheme can avoid the influence of network delay and resists replay attacks well.
  • PAN Shesu, ZHANG Jijun, ZHANG Zhaofeng
    Computer Engineering. 2020, 46(9): 149-153,162. https://doi.org/10.19678/j.issn.1000-3428.0055974
    Physical Unclonable Function (PUF) improves the security of RFID communication systems because of its unpredictable and unclonable characteristics. However, stabilizing the PUF response poses great challenges for tags with limited resources and computing power. To solve this problem, this paper proposes a pre-selection method based on conditional probability, using the property that some unstable SRAM cells are adjacent, and designs a stability processing scheme for SRAM PUF based on the reverse fuzzy extractor, so that keys can still be generated stably with little computational power and area. Experimental results show that at an average error rate of 0.14, the proposed method needs only 686 SRAM PUF cells to generate a 64-bit key with a failure rate of 4.5×10⁻⁵.
  • WANG Hui, ZHAO Ya, ZHANG Juan, LIU Kun
    Computer Engineering. 2020, 46(9): 154-162. https://doi.org/10.19678/j.issn.1000-3428.0055651
    In order to accurately predict network attack paths, this paper proposes an attack path prediction method based on the Probabilistic Attribute Network Attack Graph (PANAG). The method uses the common vulnerability scoring system to analyze vulnerability attributes and designs a Node Vulnerability Clustering (NVC) algorithm to reduce the number of vulnerabilities. A probabilistic attribute network attack graph generation algorithm, GeneratNAG, is given to avoid the state explosion of generated attack graphs. A comprehensive analysis of the factors that influence the feasibility of cyberattacks is then made, and on this basis the concept of attack value is introduced. A path generation algorithm based on attack value, BuildNAP, is proposed to eliminate redundant paths. Finally, the PANAG model is used to quantitatively analyze the possibility of different intrusion paths based on intrusion intent and to predict the attack path that the attacker is most likely to take. Experimental results demonstrate the accuracy and execution efficiency of the proposed method.
  • LIU Yanan, ZHANG Zheng, QIU Shuo, CHENG Yuan
    Computer Engineering. 2020, 46(9): 163-171. https://doi.org/10.19678/j.issn.1000-3428.0057548
    Lightweight authentication and key distribution are prerequisites for secure communication in Wireless Sensor Networks (WSN). However, due to the limited computing, storage and communication resources of sensor nodes, traditional authentication and key distribution mechanisms based on the Public Key Infrastructure (PKI) are not suitable. Therefore, this paper proposes an intra-cluster key distribution scheme based on the Physical Unclonable Function (PUF) in WSN to realize bidirectional authentication and key distribution between the gateway node and the sensor nodes in a cluster. The unclonable and unpredictable properties of PUF provide more secure and efficient bidirectional authentication, and 100% secure connectivity in the cluster is achieved through direct and indirect key distribution. Since keys are not pre-stored, the scheme reduces the storage cost and the risk of key leakage at nodes, providing perfect resistance to node capture. Besides, challenge-response pairs are not transmitted in clear text, so modeling attacks on the PUF can be resisted. Experimental results show that under the same storage overhead, the proposed scheme provides better capture resistance, secure connectivity and authentication for nodes than probabilistic key pre-distribution schemes.
  • Mobile Internet and Communication Technology
  • RUI Xiongli, CAO Xuehong
    Computer Engineering. 2020, 46(9): 172-177. https://doi.org/10.19678/j.issn.1000-3428.0056087
    In the traditional three-node cooperative communication model, relay nodes forward data for source nodes for free, which reduces spectral efficiency. To address this problem, this paper proposes a cooperative transmission mode based on the Decode-and-Forward (DF) cooperative protocol using superposition coding, and analyzes its transmission rate and outage performance. In order to improve the cooperative transmission rate of the system, the mode obtains Channel State Information (CSI) through the exchange of control information in the link layer and uses superposition coding at the relay node to superpose the relay node's own information on the source node's information. Simulation results show that compared with the traditional three-node cooperative transmission mode, the proposed mode effectively improves the transmission rate of the system.
  • LI Cuiran, LI Ang
    Computer Engineering. 2020, 46(9): 178-185. https://doi.org/10.19678/j.issn.1000-3428.0055733
    The imbalance between the topological structure of linear Wireless Sensor Networks (WSN) and the energy consumption of nodes often results in the energy hole problem. To this end, this paper constructs an energy supply model for nodes with solar energy collection. According to the energy consumed by nodes during information transmission, an energy-efficient routing algorithm based on even node clustering is proposed. Considering that the supply of solar energy is random, time-varying and influenced by weather, this paper discusses the power variation trend of the solar energy collected by nodes and the features of solar energy supply in sunny and cloudy weather respectively. Four different node transmission thresholds are then set, and the influence of their values on the life cycle, residual energy and total number of data packet transmissions of the network is analyzed. Simulation results show that compared with the single-hop transmission routing algorithm, the proposed algorithm can effectively balance the energy consumption between nodes and extend the network life cycle.
  • ZHANG Pengfei, ZHANG Yuexia
    Computer Engineering. 2020, 46(9): 186-192. https://doi.org/10.19678/j.issn.1000-3428.0057204
    To address the problems of serious signal interference and excessive power consumption in User-Centric Ultra Dense Networks (UUDN), this paper proposes a Two-Layer Stackelberg Game Power Control (TSGPC) algorithm. A model of the UUDN uplink power control system is established, and the TSGPC algorithm sets appropriate revenue functions for service users and cooperative users. The Nash equilibrium solution for the optimal transmit power and the best punishment factor of cooperative users are derived theoretically, so that the benefits of all users are maximized. The existence and uniqueness of the Nash equilibrium solution are proved, and the effectiveness of the TSGPC algorithm is verified. Simulation results show that, on the premise of ensuring communication quality, the proposed algorithm significantly improves the Signal to Interference plus Noise Ratio (SINR) of cooperative users compared with the SGUPPC, PCBSW and other algorithms, and increases the system throughput by 5.58% compared with the Nash algorithm. The algorithm significantly reduces the interference between UUDN users and improves the system throughput and capacity.
  • WEI Zihui, ZHANG Yaofa, ZHAO Jixun, XIE Yunlong, LI Xiaoting, FANG Lide
    Computer Engineering. 2020, 46(9): 193-197,204. https://doi.org/10.19678/j.issn.1000-3428.0056130
    The Taylor algorithm for TDOA-based three-dimensional positioning is affected by external conditions, which leads to large-scale solution failures. To address this problem, a simulation software for TDOA-based three-dimensional positioning algorithms is developed to simulate the application of the Taylor algorithm in TDOA-based three-dimensional positioning, and the environmental restrictions on the Taylor algorithm in this application are identified. In order to reduce the unnecessary impact of the external environment on positioning performance and avoid large-area non-convergence in the positioning results, the LM algorithm is proposed as a solution to TDOA-based three-dimensional positioning. Simulation results show that the LM algorithm overcomes the environmental limitations of the Taylor algorithm, improves the positioning accuracy while ensuring convergence, and is robust. The feasibility of the LM algorithm as a solution to TDOA-based three-dimensional positioning is thus verified.
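    SciPy exposes a Levenberg-Marquardt solver through least_squares(method="lm"), so the substitution studied here can be prototyped directly. A sketch with made-up anchor positions and noise-free TDOA measurements (all values are illustrative, not from the paper):

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    c = 3e8                                         # propagation speed, m/s
    anchors = np.array([[0, 0, 0], [50, 0, 10],     # illustrative sensor positions
                        [0, 50, 10], [50, 50, 0],
                        [25, 25, 30]], float)
    true = np.array([20.0, 30.0, 5.0])              # unknown emitter (for simulation only)

    dist = np.linalg.norm(anchors - true, axis=1)
    tdoa = (dist[1:] - dist[0]) / c                 # TDOAs relative to anchor 0

    def residuals(p):
        d = np.linalg.norm(anchors - p, axis=1)
        return (d[1:] - d[0]) - c * tdoa            # range-difference residuals

    sol = least_squares(residuals, x0=np.array([10.0, 10.0, 0.0]), method="lm")
    print(sol.x)                                    # converges to the true position
    ```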
  • RAN Chao, FANG Zhijun, ZHANG Yanyu
    Computer Engineering. 2020, 46(9): 198-204. https://doi.org/10.19678/j.issn.1000-3428.0055192
    To address the artificial shortage of electromagnetic spectrum resources, a general Software Defined Radio (SDR) system is established as a communication platform, and an improved dual-threshold energy detection algorithm is proposed. The algorithm adds additional thresholds inside the confusion region to refine the decision result and then performs a fusion decision, which reduces the waste of sensor information in the traditional algorithm and reduces the impact of noise over a real channel. In the SDR system, the frequency band usage of authorized users is detected in real time, which realizes spectrum sensing and provides a basis for the spectrum access of secondary users. Experimental results show that compared with single-threshold energy detection and the traditional double-threshold energy detection method, the proposed algorithm has a higher detection probability at a low Signal-to-Noise Ratio (SNR).
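    The underlying decision rule is simple to state: compare the test statistic (signal energy) with two thresholds and defer to fusion only inside the confusion region. A sketch of one sensor's local decision, with arbitrary threshold values standing in for the paper's refined thresholds:

    ```python
    import numpy as np

    def local_decision(samples, low, high):
        """Double-threshold energy detector for one sensing slot."""
        energy = np.mean(np.abs(samples) ** 2)   # test statistic
        if energy >= high:
            return 1, energy                     # primary user present
        if energy <= low:
            return 0, energy                     # channel idle
        return None, energy                      # confusion region: forward energy for fusion

    rng = np.random.default_rng(2)
    noise = rng.normal(scale=1.0, size=1000)     # noise-only observation (placeholder)
    print(local_decision(noise, low=0.9, high=1.3))
    ```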
  • HU Lili, XU Yan, TAO Huiqing
    Computer Engineering. 2020, 46(9): 205-212. https://doi.org/10.19678/j.issn.1000-3428.0056042
    The LTE-R wireless communication system can provide data transmission support for the railway communication network and enable stable train operation, but at present little analysis has been done on its reliability and dynamic failure characteristics. In view of this situation, this paper proposes a reliability analysis method for LTE-R systems based on the Dynamic Fault Tree (DFT). By analyzing the influence of the network structure and Quality of Service (QoS) indexes on train operation, definitions of the LTE-R reliability characteristic quantities are given and DFT analysis models are established. The reliability indexes of three interleaved redundant structures, namely single network, dual network and Radio Remote Unit (RRU) interleaving, are calculated by the Markov method and the Binary Decision Diagram (BDD) method respectively. Analysis results show that the reliability of the dual network interleaved redundant structure is the highest, with a steady-state availability of 99.999 86%, and that of the single network interleaved redundant structure is the lowest, with a steady-state availability of 99.993 69%.
  • Graphics and Image Processing
  • JI Xiuyi, LI Jianhua
    Computer Engineering. 2020, 46(9): 213-220. https://doi.org/10.19678/j.issn.1000-3428.0055881
    Most existing chemical structure image recognition methods are based on traditional image processing techniques and pipeline methods and usually rely on manually designed features, resulting in low recognition accuracy. To solve this problem, this paper proposes a chemical structure image recognition method based on spatial and channel attention mechanisms. The method casts chemical structure recognition as a sequence generation task and adopts a deep neural network combining a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network to transform chemical structure images into SMILES sequences. The model consists of an encoder and a decoder: the encoder uses the CNN to extract features from chemical structure images, and the decoder combines the two attention mechanisms with the LSTM to generate SMILES sequences. Experimental results show that the proposed method improves the recognition accuracy to 81.63% and the BLEU-4 value to 0.937 with a Beam Size of 3, outperforming chemical structure image recognition methods without an attention mechanism or with a single attention mechanism.
  • XIAO Jingwei, TIAN Junwei, WANG Qin, CHENG Xixi, WANG Jia
    Computer Engineering. 2020, 46(9): 221-225. https://doi.org/10.19678/j.issn.1000-3428.0056094
    In the practical application of fruit disease classification, the layers and parameters of traditional residual networks are redundant, and the original loss function easily misidentifies similar diseases. In order to solve the problems of redundant parameters and low discrimination between similar samples in fruit classification, this paper proposes an improved residual network structure that reduces the number of residual blocks and convolution kernels, thereby reducing the parameters of the convolution layers. At the same time, an inter-class similarity penalty term is added to the original loss function to widen the distance between different classes, improving the classification accuracy for diseases. Experimental results show that compared with the original residual network, the improved residual network reduces the number of parameters by about 25%, and the recognition accuracy with the improved loss function reaches 92.76%.
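    One plausible realization of an inter-class similarity penalty is to add the pairwise cosine similarity of the classifier's class-weight vectors to the cross-entropy loss, pushing similar disease classes apart. This concrete form is an assumption for illustration, not the paper's exact formula:

    ```python
    import torch
    import torch.nn.functional as F

    def loss_with_separation(logits, labels, class_weights, lam=0.1):
        """Cross-entropy plus a penalty on between-class weight similarity (assumed form)."""
        ce = F.cross_entropy(logits, labels)
        w = F.normalize(class_weights, dim=1)        # rows = class weight vectors
        sim = w @ w.t()                              # pairwise cosine similarities
        off = sim - torch.diag(torch.diag(sim))      # ignore self-similarity
        n = sim.size(0)
        penalty = off.sum() / (n * (n - 1))          # mean inter-class similarity
        return ce + lam * penalty

    # Usage sketch, assuming the backbone's final linear layer is model.fc:
    # loss = loss_with_separation(model(x), y, model.fc.weight)
    ```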
  • CHEN Ze, YE Xueyi, QIAN Dingwei, WEI Yangyang
    Computer Engineering. 2020, 46(9): 226-232,241. https://doi.org/10.19678/j.issn.1000-3428.0055817
    To improve the accuracy of small-scale pedestrian detection, this paper proposes a target detection method based on an improved Faster R-CNN. The network uses a new aligned pooling layer based on bilinear interpolation to avoid the positional deviation caused by the two quantization operations in Region of Interest (RoI) pooling. A cascade-based multi-layer feature fusion strategy is then designed, which concatenates shallow feature maps rich in detail with deep feature maps carrying abstract semantic information, addressing the insufficiency of small-scale pedestrian features in deep feature maps. Experimental results on the INRIA and PASCAL VOC2012 datasets show that, at the same detection efficiency, the proposed method increases the mean Average Precision (mAP) of small-scale pedestrian detection by 17.58% and 23.78% respectively compared with the detection method based on Faster R-CNN.
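    Bilinear-interpolation aligned pooling of the kind described here behaves like torchvision's RoIAlign, which skips both quantization steps of classic RoI pooling. A usage sketch (feature-map size, stride and box coordinates are illustrative, and this is the library operator, not the paper's own layer):

    ```python
    import torch
    from torchvision.ops import roi_align

    feats = torch.randn(1, 256, 50, 50)          # backbone feature map (placeholder)
    # Boxes in image coordinates: (batch_index, x1, y1, x2, y2).
    boxes = torch.tensor([[0, 32.0, 48.0, 96.0, 200.0]])

    pooled = roi_align(
        feats, boxes,
        output_size=(7, 7),
        spatial_scale=1 / 16,    # image-to-feature-map stride
        sampling_ratio=2,        # bilinear samples per bin
        aligned=True,            # half-pixel correction, no quantization
    )
    print(pooled.shape)          # torch.Size([1, 256, 7, 7])
    ```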
  • LIU Yang, HUANG Darong, LIU Yang, ZHONG Wei
    Computer Engineering. 2020, 46(9): 233-241. https://doi.org/10.19678/j.issn.1000-3428.0057802
    Based on a constructed traffic sign color matrix, this paper analyzes color standardization mappings and determines the target color set for traffic sign color standardization. The statistics of the RGB components in the triangle center area and the ring area of a traffic sign are then analyzed, and on this basis a coarse image classification method based on regional RGB statistics is proposed to set parameters and enhance brightness. Based on the brightness statistics of the rectangular center area, a method for determining backlit images is also proposed. After image preprocessing, four classifiers based on the YIQ and HSV color spaces are cascaded to implement color standardization: a black-white classifier based on binarization of the Y and S values, a red-green-blue-yellow classifier based on variable interval division of the H value, a red-brown classifier based on the YIQ space, and a black-white compensation classifier based on the coarse image classification. Finally, a test data set is constructed from the Chinese traffic sign detection data set, and experimental results show that the proposed method achieves traffic sign color standardization mapping at a high success rate.
  • HUANG Wei, FENG Jingjing, HUANG Yao
    Computer Engineering. 2020, 46(9): 242-247,253. https://doi.org/10.19678/j.issn.1000-3428.0055740
    When applied to the super-resolution reconstruction of a single image, a shallow Convolutional Neural Network (CNN) extracts a limited number of features, which weakens the reconstruction of details. To address this problem, this paper proposes a super-resolution image reconstruction algorithm based on a very deep CNN with multi-channel input. Three interpolation and three sharpening preprocessing operations are performed on the original low-resolution image, and the resulting multi-channel image is used as the input layer of the CNN. The size of the convolution kernels is also readjusted to deepen the network structure, so that the input data are trained in a very deep CNN model to reconstruct high-resolution images. Experimental results show that compared with Bicubic, SRCNN, MC-SRCNN and other algorithms, this algorithm achieves a better Peak Signal-to-Noise Ratio (PSNR) and visual effect.
  • JI Bin, REN Jianjun, ZHENG Xiujuan, TAN Cong, JI Rong, ZHAO Yu, LIU Kai
    Computer Engineering. 2020, 46(9): 248-253. https://doi.org/10.19678/j.issn.1000-3428.0056011
    Laryngeal leukoplakia is a kind of precancerous tissue lesion, and accurate detection of the lesion is very important for the prevention and treatment of cancer. However, blurred lesion edges and surface reflections in laryngoscope images make segmentation difficult. Therefore, this paper proposes a U-Net-based multi-scale recurrent Convolutional Neural Network (MRU-Net) to segment laryngeal leukoplakia lesions. The network uses Contrast-Limited Adaptive Histogram Equalization (CLAHE) to enhance laryngoscope images, and an image pyramid built by average pooling serves as the multi-scale input of the U-shaped network. Multi-scale convolution and Recurrent Convolution Layers (RCL) replace the convolution layers of the encoding and decoding units to improve the network structure. A multi-scale output layer generates feature maps of different scales, which are averaged to produce the final result. Experimental results show that the F1 value, Jaccard Similarity (JS) and Mean Intersection over Union (MIoU) of MRU-Net are 0.784 3, 0.661 1 and 0.826 9 respectively. Compared with traditional medical image segmentation methods such as U-Net and M-Net, the proposed network segments laryngeal leukoplakia lesions more accurately and obtains more precise lesion contours.
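    The CLAHE enhancement step maps directly onto OpenCV's implementation. A sketch of enhancing one frame, equalizing only the luminance channel (the file name and parameter values are placeholders):

    ```python
    import cv2

    img = cv2.imread("frame.png")                         # placeholder input image
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)            # equalize luminance only
    l, a, b = cv2.split(lab)

    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l_eq = clahe.apply(l)                                 # contrast-limited equalization

    enhanced = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
    cv2.imwrite("frame_clahe.png", enhanced)
    ```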
  • ZHANG Jing, CHEN Qingkui
    Computer Engineering. 2020, 46(9): 254-260,267. https://doi.org/10.19678/j.issn.1000-3428.0055701
    The analysis of crowd congestion degree is very important for maintaining public safety. In a narrow space, the viewing angle is generally limited, and occlusion between people and between people and objects is serious; moreover, because of the varying scales of people and uneven density, traditional methods often fail to directly obtain the specific number of people in a narrow space. To address this problem, this paper proposes an attention-based method for analyzing crowd congestion degree in a narrow space in order to quantify the crowd. The method analyzes the congestion degree of the current space through the congestion rate regressed by a Convolutional Neural Network (CNN). An attention module is designed as the front end of the network, distinguishing the background from the crowd by generating attention maps of corresponding scales and retaining accurate pixel position information to reduce the impact of noise in the input image. The attention map and the original image are multiplied pixel by pixel and fed into a fine-tuned ResNet to regress the crowd congestion rate. Experimental results show that the proposed method can predict the congestion rate, accurately reflect the current crowd congestion degree, and support crowd flow control.
  • TAN Lei, SUN Huaijiang
    Computer Engineering. 2020, 46(9): 261-267. https://doi.org/10.19678/j.issn.1000-3428.0055895
    Most existing semantic segmentation models apply receptive fields of a single size in each convolution layer, which prevents the models from extracting multi-scale features. To address this problem, this paper uses selective kernel convolution to build a novel residual module, Selective-Kernel-Array-Shuffle (SKAS). Selective kernel convolution obtains multi-scale information by adjusting the size of the receptive field. A layer-wise grouped convolution method is also proposed to build a lightweight network, SKASNet, in which the number of groups varies across consecutive SKAS blocks, reducing the number of network parameters relatively smoothly and enhancing the exchange of information between groups. Experimental results on the Cityscapes dataset show that the proposed model has only 1.7 M parameters and reaches a segmentation accuracy of 68.5%. Compared with SegNet, ICNet, PSPNet and other models, it achieves excellent segmentation performance while greatly reducing the number of network parameters.
  • SHENG Long, MA Jianfei, YANG Ruixin, WU Di
    Computer Engineering. 2020, 46(9): 268-273. https://doi.org/10.19678/j.issn.1000-3428.0055648
    To address the problem that deep learning relies too much on labeled data in image recognition applications, this paper proposes a Convolutional Neural Network (CNN) image classification algorithm based on feature exchange. By combining the feature extraction of the CNN with the pixel position prediction of the fully convolutional network, feature maps extracted from the convolution layers of the CNN are exchanged with feature maps of similarly labeled samples, so that the limited image features are fully fused to alleviate the shortage of samples in image recognition. Experimental results show that the proposed algorithm reduces the dependence on labeled data and significantly improves the recognition accuracy of the network. It is suitable for image classification scenarios where data cannot be obtained in large quantities.
  • Development Research and Engineering Application
  • DONG Yi, LIU Jingfa, LIU Wenjie
    Computer Engineering. 2020, 46(9): 274-282. https://doi.org/10.19678/j.issn.1000-3428.0055967
    Traditional search engines based on keyword matching often fail to ensure the completeness and accuracy of the scraped data, while focused crawler methods based on semantic retrieval tend to deviate from the topic and fall into local optima. To solve these problems, this paper proposes a focused crawler method based on a multi-objective ant colony optimization algorithm. The method constructs a domain ontology and a topic vector for the focused crawler. Whether a link is relevant to the topic is then determined from the relevance of the link's anchor text, the topic relevance of the Web page where the link is located, and the topic relevance of the page the link points to, and a multi-objective optimization model for the topic relevance of links is established. The ant colony algorithm based on multi-objective optimization is introduced into the link selection process of the focused crawler, and non-dominated sorting and the Nearest and Farthest Candidate Solution (NFCS) method are adopted to select Pareto-optimal links, guiding the search direction of the crawler and improving global search performance. Experimental results show that compared with FCSA, WSE and other traditional focused crawler methods, the proposed method improves the completeness of the scraped data and captures Web pages highly relevant to the topic more quickly.
  • ZHANG Kai, ZHOU Deyun, YANG Zhen, PAN Qian
    Computer Engineering. 2020, 46(9): 283-291,297. https://doi.org/10.19678/j.issn.1000-3428.0055544
    In order to solve the real-time problem of Weapon Target Assignment (WTA), this paper establishes a mathematical model of WTA based on the division of fire sets and proposes a fast decision-making algorithm, Neighborhood Search based on Fuzzy Adaptive Resonance Theory (FART-NS). The fast generalization ability of Fuzzy Adaptive Resonance Theory (FART) is used to improve the real-time performance of the algorithm, and a virtual node is introduced to improve the optimization ability of the Neighborhood Search (NS) algorithm in the WTA solution space. Together they form a closed-loop online learning mechanism of fast generalization and neighborhood optimization, which makes the FART-NS algorithm robust to the accuracy and sampling density of the training set. Simulation results show that the FART-NS algorithm outperforms mainstream algorithms such as BBA and improved GA in time complexity, and can balance the real-time performance and convergence of the WTA problem.
  • XIN Weiyao, LI Jian, WANG Xiaoliang, LI Yujian
    Computer Engineering. 2020, 46(9): 292-297. https://doi.org/10.19678/j.issn.1000-3428.0056088
    To address the problem that the energy focus in the underground energy field focusing model cannot be effectively identified, this paper proposes a deep learning-based method for positioning shallow underground hypocenters. The method uses reverse-time amplitude superposition to reconstruct the vibration data obtained by the sensor array into a massive sample sequence of three-dimensional energy field images, which are used as the input data of the deep learning network. A 3D-CNN model is used to build the deep learning framework. The coordinates of known hypocenters serve as labels during training, and the data and labels are fed into the network for training and testing, forming a learning model that maps the three-dimensional energy field to hypocenter coordinates. The model then outputs the coordinates of the focal points, that is, the hypocenter coordinates. Experimental results show that the proposed method can effectively identify the focal points of the energy field and can be applied to the positioning of shallow underground hypocenters.
  • MA Liang, XU Gang
    Computer Engineering. 2020, 46(9): 298-305,312. https://doi.org/10.19678/j.issn.1000-3428.0056137
    To address the voltage deviation and uneven distribution of reactive power in island microgrids caused by traditional droop control and line impedance mismatch, this paper proposes a distributed hierarchical control strategy based on a self-triggered consistency algorithm. In the secondary control layer of the microgrid, global average estimators of voltage and reactive power are constructed using the consistency algorithm, and the voltage and reactive power deviations in the primary control layer are compensated accordingly. In Cyber-Physical System (CPS) environments, network intruders can launch Denial of Service (DoS) attacks on the communication network of the secondary control layer and block the information exchange between agents, degrading the performance of the consistency algorithm. This paper therefore gives a consistency algorithm based on ternary control with a self-triggered communication strategy, which introduces an attack detection function to overcome constraints on the frequency of DoS attacks and achieves on-demand communication while enhancing robustness against DoS attacks. Simulation results show that the proposed control strategy can recover the voltage deviation, guarantee even distribution of reactive power in the microgrid, and ensure the convergence of the consistency algorithm under DoS attacks.
  • TANG Hao, LIU Baisong, LIU Xiaoling, HUANG Weiming
    Computer Engineering. 2020, 46(9): 306-312. https://doi.org/10.19678/j.issn.1000-3428.0056603
    Paper recommendation methods based on collaborative filtering suffer from data sparsity when processing massive data. To address this problem, this paper proposes a paper recommendation method based on representation learning over a Knowledge Graph (KG). The method constructs a collaborative KG from open knowledge databases and records of user-paper interactions. A translation-based representation learning algorithm for KG is then used to map user and paper nodes into low-dimensional dense vectors. An attention mechanism over textual and structural information is introduced to model the reading preferences of users, and an aggregation function fuses the feature representations of users' neighborhood nodes. Finally, the recommendation list is obtained from the correlation scores of users and papers calculated by iteratively applying a Multi-Layer Perceptron (MLP). Experimental results on CiteULike-a show that the proposed method outperforms traditional recommendation methods based on Collaborative Filtering (CF), Content-Based Filtering (CBF) and KG, effectively mining the potential semantic correlations between papers and improving the quality of paper recommendation.
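    Translation-based KG embedding of the kind used here (TransE-style) scores a triple (h, r, t) by how close the translation h + r lands to t. A NumPy sketch of the scoring and of ranking candidate papers for a user (the vectors are random stand-ins for trained embeddings, and the "user-reads-paper" relation is an assumed name):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    dim = 16
    user = rng.normal(size=dim)                  # trained user embedding (stand-in)
    reads = rng.normal(size=dim)                 # "user-reads-paper" relation embedding
    papers = rng.normal(size=(100, dim))         # candidate paper embeddings

    def transe_score(h, r, t):
        """Higher is better: negative L2 distance of the translation h + r from t."""
        return -np.linalg.norm(h + r - t, axis=-1)

    scores = transe_score(user, reads, papers)   # score every candidate at once
    top10 = np.argsort(-scores)[:10]             # recommendation list
    ```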
  • CUI Kunkun, FAN Shaosheng
    Computer Engineering. 2020, 46(9): 313-320. https://doi.org/10.19678/j.issn.1000-3428.0055581
    To address the low navigation accuracy and poor robustness of inspection point recognition of substation inspection robots, this paper proposes a visual navigation and path feature recognition method based on dynamic double windows. According to the navigation image matching results and the camera pose deviation, the navigation window is set dynamically, and the image is transformed from the traditional Red, Green and Blue (RGB) color space into the Hue, Saturation and Value (HSV) color space for gray image reconstruction. The navigation path is extracted using a partitioned adaptive threshold segmentation algorithm and simplified into a linear model, and the distance deviation between the robot and the navigation path is calculated by the least squares method. The full field of view is used as the feature recognition window, and the region proposal-based Faster R-CNN algorithm is improved according to the length-width ratio of the path, recognizing the features of five kinds of path. Experimental results show that under strong and weak light conditions, the linear tracking and curve tracking deviations of the inspection robot obtained by the proposed method are less than 5 mm and 25 mm respectively, and the average recognition accuracy for the five kinds of path features reaches 98.6%. Compared with the traditional HOG+SVM target detection method, the proposed method effectively improves navigation accuracy and the robustness of path feature recognition.
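    The distance-deviation step reduces to fitting a line to the extracted path pixels and measuring the perpendicular distance of the robot's reference point to it. A NumPy sketch (the pixel coordinates and reference point are placeholders):

    ```python
    import numpy as np

    # Placeholder (x, y) pixel coordinates of the extracted navigation path.
    xs = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
    ys = np.array([12.0, 19.5, 31.0, 39.0, 51.0])

    k, b = np.polyfit(xs, ys, deg=1)             # least-squares line y = k*x + b

    def point_line_distance(px, py, k, b):
        """Perpendicular distance from (px, py) to the line y = k*x + b."""
        return abs(k * px - py + b) / np.hypot(k, 1.0)

    robot = (30.0, 25.0)                         # robot reference point (placeholder)
    print(point_line_distance(*robot, k, b))     # distance deviation in pixels
    ```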