
15 July 2020, Volume 46 Issue 7
    

  • Hot Topics and Reviews
  • WANG Haofen, DING Jun, HU Fanghuai, WANG Xin
    Computer Engineering. 2020, 46(7): 1-13. https://doi.org/10.19678/j.issn.1000-3428.0057869
    In recent years, knowledge graph and its related technologies have developed rapidly and have been widely applied in various cognitive intelligence scenarios in industry. This paper gives a brief description of research on knowledge graph, and on this basis introduces key technologies of knowledge graph in engineering applications. Next, the paper studies the typical application scenarios of industry knowledge graphs, the corresponding case studies supported by well-known industry knowledge graph platforms, and the tools available in each phase of the knowledge graph life cycle. Then the paper analyzes the requirements for constructing enterprise-level knowledge graph platforms and the key problems in this process, and describes the construction method and process. In view of the problems encountered in platform construction, this paper gives corresponding solutions based on knowledge graph middle-platform construction, and discusses the future development and challenges of knowledge graph.
  • ZHU Lanting, SUN Lijun, YAN Yang
    Computer Engineering. 2020, 46(7): 14-20,29. https://doi.org/10.19678/j.issn.1000-3428.0056348
    Vehicle Fog Computing(VFC) is an extended model of fog computing that combines fog computing with traditional in-vehicle networks to provide real-time response services for vehicle users. The combination of intelligent parking assistance and VFC can help vehicles obtain parking information resources and improve traffic conditions. However, how to enable vehicle users to efficiently obtain parking information remains an open issue in VFC parking assistance. Therefore, this paper establishes a VFC parking assistance system model, and on this basis proposes a VFC parking assistance allocation strategy, RAFC, which uses a reverse auction to encourage vehicle users and fog nodes to actively participate in resource allocation and obtain revenue. Theoretical analysis and experimental results show that the RAFC strategy can achieve a balance between individual rationality and budget. Compared with the random matching method, RAFC can improve the matching success rate and social utility while reducing the overhead for users.
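A minimal sketch of a reverse-auction allocation in the spirit of RAFC. All names, the greedy matching order, and the second-price-style payment rule are illustrative assumptions, not the paper's exact mechanism.

```python
def reverse_auction(requests, bids):
    """requests: {user: budget}; bids: {fog node: ask price}.
    Greedily match each user (cheapest budget first) to the cheapest
    remaining fog node whose ask does not exceed the user's budget;
    the winner is paid the minimum of the budget and the next-lowest
    ask, a second-price flavour that keeps bidding truthful."""
    available = dict(bids)
    matches = {}
    for user, budget in sorted(requests.items(), key=lambda kv: kv[1]):
        affordable = [(ask, node) for node, ask in available.items() if ask <= budget]
        if not affordable:
            continue  # no individually rational match for this user
        affordable.sort()
        ask, node = affordable[0]
        # payment never exceeds the budget and never drops below the ask,
        # so both sides gain (individual rationality, budget balance)
        payment = min(budget, affordable[1][0] if len(affordable) > 1 else budget)
        matches[user] = (node, payment)
        del available[node]
    return matches

matches = reverse_auction({"u1": 5, "u2": 3}, {"f1": 2, "f2": 4, "f3": 6})
```

Each fog node serves at most one request here; a real deployment would repeat the auction per scheduling round.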
  • ZHENG Weicheng, LI Xuewei, LIU Hongzhe, DAI Songyin
    Computer Engineering. 2020, 46(7): 21-29. https://doi.org/10.19678/j.issn.1000-3428.0055912
    To realize the identification and warning of fatigue driving in complex driving environments, this paper proposes a fatigue driving detection algorithm based on deep learning. The algorithm uses an MTCNN model based on the shuffle-channel concept to detect the facial images of drivers collected in real time by ordinary cameras. Then the PFLD deep learning model is used for facial keypoint detection to locate the eyes, the mouth and the head, so as to extract feature parameters including the blinking rate, the extent to which the mouth opens, and the nodding frequency. Finally, based on a multi-feature fusion strategy, the fatigue state of the driver is obtained to implement effective alarming for fatigue driving. Experimental results show that false warnings do not occur in the fatigue driving warnings generated by the proposed algorithm, which means the proposed algorithm has high detection accuracy and robustness.
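One way to fuse the three extracted features is a weighted vote against per-feature thresholds. The thresholds, weights, and alarm cut-off below are invented for the sketch; the paper's exact fusion strategy may differ.

```python
def fatigue_score(blink_rate, mouth_open_ratio, nod_freq,
                  blink_thr=0.4, mouth_thr=0.6, nod_thr=0.3,
                  weights=(0.4, 0.3, 0.3)):
    """Each feature votes 1 when it exceeds its threshold; the
    weighted sum of the votes is the fused fatigue score in [0, 1]."""
    votes = (blink_rate > blink_thr,
             mouth_open_ratio > mouth_thr,
             nod_freq > nod_thr)
    return sum(w for w, v in zip(weights, votes) if v)

def is_fatigued(*features, alarm_thr=0.5):
    """Raise the fatigue alarm when the fused score reaches the cut-off."""
    return fatigue_score(*features) >= alarm_thr
```

For example, frequent blinking plus a wide-open mouth trips the alarm even when no nodding is observed, while a single marginal feature does not.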
  • CHEN Liyan, RUI Tingxian, Lü Guangjin
    Computer Engineering. 2020, 46(7): 30-35. https://doi.org/10.19678/j.issn.1000-3428.0057140
    For the construction of digital government, this paper proposes a privacy protection scheme for personal credit that combines smart contracts and homomorphic encryption technology in blockchain. The scheme uses the Paillier homomorphic encryption algorithm to set blind reading permissions for personal credit information access, so as to enable credit access users to create an automatic condition matching contract and make reasonable decisions even when they cannot obtain the plaintext of personal credit information. Under this scheme, the credit system cannot infer the access requirements of credit access users, so the privacy of personal credit information is protected from multiple perspectives. Analysis results show that the proposed scheme can effectively protect personal credit privacy with reduced running overhead and improved security.
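The blind evaluation rests on Paillier's additive homomorphism: multiplying two ciphertexts yields a ciphertext of the sum, so scores can be aggregated without seeing plaintexts. A textbook sketch with toy primes (demonstration only, utterly insecure key sizes):

```python
import math
import random

p, q = 293, 433                    # toy primes for the demo
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
# mu = L(g^lam mod n^2)^-1 mod n, where L(x) = (x - 1) // n
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:     # r must be coprime to n
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

# additive homomorphism: E(a) * E(b) mod n^2 decrypts to a + b
c = encrypt(20) * encrypt(22) % n2
```

In the scheme's setting, the contract can combine encrypted credit components this way and test a condition on the aggregate without the credit system ever learning the individual values.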
  • WANG Jianhua, CHEN Yongle, ZHANG Zhuangzhuang, LIAN Xiaowei, CHEN Junjie
    Computer Engineering. 2020, 46(7): 36-42. https://doi.org/10.19678/j.issn.1000-3428.0057547
    IP traceback is one of the main methods of attack group identification. Industrial Control Systems(ICS) need accurate IP traceback to improve their self-protection. However, existing IP traceback methods are costly and inefficient in identifying the group a malicious IP belongs to. To address the problem, by collecting and analyzing the honeypot data of ICS, this paper proposes a same-origin attack analysis method based on ICS function code features, so as to find attack groups with similar attack behavior and improve the efficiency and accuracy of IP traceback. The method uses coarse-grained statistical features and fine-grained sequence features of industrial control function codes to quantify the attack behavior. Then the two kinds of features are modeled by using a rough set and a clustering model. On this basis, the same-origin attacks in the honeypot data are analyzed. Experimental results show that the proposed method can use threat intelligence to discover more than 10 malicious groups, including Shodan, in the honeypot data with a high accuracy and recall rate.
  • Artificial Intelligence and Pattern Recognition
  • WAN Meihan, XIONG Yun, ZHU Yangyong
    Computer Engineering. 2020, 46(7): 43-49. https://doi.org/10.19678/j.issn.1000-3428.0054805
    The rapid development of genome sequencing has led to the explosive growth of gene and genomic sequence data in biological databases, in which the functions of a large number of genes still remain unknown. Therefore, this paper proposes a gene node representation learning method, HAGE, based on a hierarchical attention mechanism in a heterogeneous network to predict the function of genes. Firstly, a gene function-related heterogeneous network with node attributes is constructed. Then the hierarchical attention mechanism is used in the network to enable each gene node to learn a node embedding vector, which can be used for subsequent tasks such as gene function prediction. Experimental results show that the proposed method has better performance than GraphSAGE, GAT and other methods.
  • ZHANG Xianyang, ZHU Xiaoyu, LIN Haoshen, LIU Gang, AN Xibin
    Computer Engineering. 2020, 46(7): 50-57. https://doi.org/10.19678/j.issn.1000-3428.0055074
    The trajectory prediction of warships requires high accuracy and real-time performance, but the high complexity of warship trajectory data features causes traditional prediction algorithms to be inaccurate and time-consuming, reducing prediction performance. To address the problem, this paper proposes a warship trajectory prediction algorithm based on the Variational Autoencoder(VAE). The trajectory coordinate data set is transformed into a trajectory motion vector set, and the trajectory motion features are extracted and generated by using the variational autoencoder. Also, in order to improve the prediction accuracy, the latent space distribution of the variational autoencoder network is set to a Gaussian mixture distribution, which is closer to the real data distribution. Then the classification of trajectory features is accomplished in the latent space to implement end-to-end trajectory prediction. Simulation results show that compared with the traditional trajectory prediction algorithms GMMTP and VAETP, the proposed algorithm reduces the prediction error rate by 85.48% and 35.59% respectively.
  • ZHOU Shiyuan, WANG Yinglin
    Computer Engineering. 2020, 46(7): 58-64,71. https://doi.org/10.19678/j.issn.1000-3428.0054780
    To maximize the amount of information in the generated summary, this paper proposes a multi-document summarization method based on the Cuckoo Search(CS) algorithm and a multi-objective function. The method pre-processes the data of multiple documents by sentence segmentation, word segmentation, stop-word removal and word stemming to transform the documents into a basic processed form of words. Then the information-amount score of the pre-processed sentences is calculated to serve as the input of the CS algorithm. Based on the multi-objective function, the sentences containing the key information of the original texts are selected to form the final summary. Results show that compared with multi-document summarization methods based on the Particle Swarm Optimization(PSO) algorithm and the Double-layer K Nearest Neighbor(DKNN) algorithm, the proposed method maximizes the amount of information in the generated summary while keeping high readability and low redundancy. Its average accuracy rate on the DUC benchmark dataset reaches 0.99.
  • ZHANG Yijie, LI Peifeng, ZHU Qiaoming
    Computer Engineering. 2020, 46(7): 65-71. https://doi.org/10.19678/j.issn.1000-3428.0054800
    Based on the correlation between the temporal relations and causal relations of events, this paper proposes a joint identification method using neural networks. The method takes the identification of temporal relations as the main task and that of causal relations as the auxiliary task. On this basis, three types of joint identification models, sharing the encoding layer, the decoding layer, or both the encoding and decoding layers of the auxiliary task, are designed to enable information sharing between the network layers of the main task model and the auxiliary task model. Then the feature information of the joint identification models is learned. Experimental results show that the joint identification method can use the causal information between events to significantly improve the identification performance of temporal relations. Also, the joint identification model sharing the encoding-decoding layers of the auxiliary task is more suitable for the joint identification of temporal and causal relations of events.
  • GUO Yu, CHEN Jinyong, ZHANG Xinyu, LI Liang, SUN Weiwei
    Computer Engineering. 2020, 46(7): 72-77,83. https://doi.org/10.19678/j.issn.1000-3428.0054309
    Trajectory clustering is an important step in spatio-temporal trajectory processing. Common trajectory clustering algorithms, such as the TRACLUS algorithm, usually have high time complexity and are sensitive to input parameters, thus consuming a lot of time to find optimal parameters. In order to solve this problem, this paper improves the TRACLUS algorithm by using offline batch processing technology and the OPTICS algorithm. The optimization reduces the sensitivity to input parameters and the time for trajectory clustering over multiple sets of parameters, so the workload of manual parameter tuning is reduced. Experimental results show that the time efficiency of the improved algorithm is greatly increased when the optimal parameters are unknown and multiple sets of parameters need to be tested.
  • MA Huifang, LI Miao, TONG Haibin, ZHAN Zijun
    Computer Engineering. 2020, 46(7): 78-83. https://doi.org/10.19678/j.issn.1000-3428.0054895
    Based on wildcard patterns and a random walk algorithm with prior information, this paper proposes an improved keyword extraction algorithm. The algorithm uses wildcard constraints to capture the semantic information between words, and extracts the sequential patterns that satisfy the gap constraint and the one-off condition in order to calculate the pattern support degree. When the pattern support degree is not lower than the minimum support threshold, the node association graph is established. The similarity between words in the Wikipedia knowledge base is taken as prior information, and random walks are performed on the association graph by using the PageRank algorithm with prior information until the ranking scores stabilize. The top-K words are selected as keywords. Experimental results show that the proposed method has higher extraction accuracy and stability than TextRank, GraphSum and other algorithms.
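The prior-biased random walk is essentially a personalized PageRank: the teleport distribution is the prior similarity of each word rather than uniform. A sketch on a tiny word graph (the graph and prior numbers are invented, standing in for the pattern-derived edges and Wikipedia-based similarities):

```python
def prior_pagerank(graph, prior, d=0.85, iters=100):
    """graph: {node: [out-neighbors]}; prior: {node: weight}.
    Random-walk restarts follow the normalized prior, so words the
    knowledge base deems important keep attracting probability mass."""
    total = sum(prior.values())
    p = {v: w / total for v, w in prior.items()}   # normalized prior
    rank = dict(p)
    for _ in range(iters):
        rank = {v: (1 - d) * p[v]
                   + d * sum(rank[u] / len(graph[u])
                             for u in graph if v in graph[u])
                for v in graph}
    return rank

graph = {"net": ["deep", "model"], "deep": ["net"],
         "model": ["net"], "the": ["net"]}
prior = {"net": 3.0, "deep": 2.0, "model": 2.0, "the": 0.1}
rank = prior_pagerank(graph, prior)
```

The stopword-like "the" ends up near the bottom even though it is connected, because its prior keeps the restart mass away from it; top-K selection then picks "net" first.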
  • QIU Shaoming, YU Tao, DU Xiuli, CHEN Bo
    Computer Engineering. 2020, 46(7): 84-90,97. https://doi.org/10.19678/j.issn.1000-3428.0055070
    Existing community division algorithms lack diversity in the division method, and their division results are not accurate. To address the problem, this paper proposes a community division algorithm, SM-CD, based on similarity clustering over multiple attributes of nodes. The algorithm uses social network features to define the structural attributes and the nodes' own attributes. By adjusting the weight of the two kinds of attributes in the network, the similarity matrix of network nodes is calculated. Then the nodes are divided into different communities according to similarity and modularity. Experimental results on the real network data of Zachary and Football show that SM-CD has a higher accuracy rate in community division than Newman, GN and other algorithms.
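The weighted blend of the two attribute kinds can be sketched as a per-pair similarity: structural similarity (here Jaccard over neighbor sets, an assumed choice) mixed with own-attribute similarity via a weight alpha. The toy graph and alpha are illustrative, not the paper's data.

```python
def jaccard(a, b):
    """Jaccard similarity of two sets; 0 when both are empty."""
    return len(a & b) / len(a | b) if a | b else 0.0

def node_similarity(neighbors, attrs, u, v, alpha=0.5):
    """Blend structural similarity (shared neighbors) with the
    similarity of the nodes' own attribute sets; alpha tunes the
    weight of structure versus attributes."""
    structural = jaccard(neighbors[u], neighbors[v])
    attribute = jaccard(attrs[u], attrs[v])
    return alpha * structural + (1 - alpha) * attribute

neighbors = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}, 4: {5}, 5: {4}}
attrs = {1: {"a"}, 2: {"a"}, 3: {"a", "b"}, 4: {"c"}, 5: {"c"}}
s12 = node_similarity(neighbors, attrs, 1, 2)   # same community
s14 = node_similarity(neighbors, attrs, 1, 4)   # different communities
```

Filling a full matrix of such scores and clustering it (guided by modularity) is the step the abstract describes.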
  • LI Guanyu, ZHANG Pengfei, JIA Caiyan
    Computer Engineering. 2020, 46(7): 91-97. https://doi.org/10.19678/j.issn.1000-3428.0054953
    In natural language processing tasks, the attention mechanism can be used to evaluate the importance of a word. On this basis, this paper proposes an attention-enhanced natural language inference model, aESIM. The model adds a word attention layer and an adaptive direction weight layer to the bidirectional LSTM network of the ESIM model, so as to learn the representation of words and sentences more effectively and improve the modeling of local inference between premise and hypothesis texts. Experimental results on the SNLI, MultiNLI and Quora datasets show that, compared with ESIM, HBMP, SSE and other models, aESIM increases the accuracy rate by 0.5%-1%.
  • ZHANG Zhimin, CHAI Bianfang, LI Wenbin
    Computer Engineering. 2020, 46(7): 98-103,109. https://doi.org/10.19678/j.issn.1000-3428.0054158
    Most attribute network embedding algorithms consider only the direct links between nodes when modeling topology, ignoring indirect links and the common link ratio of different nodes, which leads to inadequate extraction of the real network topology characteristics. To solve this problem, an attribute network embedding algorithm based on a sparse autoencoder, SAANE, is proposed. The second-order neighbor and common-neighbor ratios are extracted according to the network topology. On this basis, the text attribute information of each node is fused, and the fused vector is trained by the optimal sparse autoencoder network to obtain the low-dimensional embedding vector of the node. Results of clustering and classification experiments on five real networks show that SAANE outperforms DeepWalk, Node2Vec, LINE and other mainstream algorithms, increasing the average NMI value by 5.83% in clustering and the average classification accuracy by 4.53%.
  • JIN Yazhou, ZHANG Zhengjun, YAN Zihan, WANG Yaping
    Computer Engineering. 2020, 46(7): 104-109. https://doi.org/10.19678/j.issn.1000-3428.0054652
    For classification problems in multi-label learning, the algorithm adaptation methods that transform them into a ranking problem and rank the output labels according to their relevance to the examples have achieved great success. This paper proposes a multi-label learning algorithm based on the margin criterion, which optimizes the margin loss between the minimum output in the relevant label set of an example and the maximum output in its irrelevant label set, so as to rank the labels. On this basis, in order to utilize all the label information, an improved ranking optimization algorithm for multi-label learning is proposed, which respectively optimizes the margin loss between the average output in the relevant label set and the maximum output in the irrelevant label set, and the margin loss between the minimum output in the relevant label set and the average output in the irrelevant label set, so as to rank the labels. Then an improved sub-gradient Pegasos algorithm is used to learn the model parameters. Experimental results on four multi-label datasets show that the two improved algorithms achieve classification performance similar to ML-RBF, BP-MLL and ML-KNN under HL, RL and three other evaluation criteria.
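The two criteria can be written down concretely as hinge losses on score gaps. The scores and labels below are made-up model outputs, and the unit margin is an assumed hyperparameter; only the min/max/average structure mirrors the abstract.

```python
def hinge(x, margin=1.0):
    """Hinge penalty: zero once the gap x reaches the margin."""
    return max(0.0, margin - x)

def min_max_margin_loss(scores, relevant):
    """Base criterion: gap between the worst relevant score and the
    best irrelevant score."""
    rel = [scores[l] for l in relevant]
    irr = [s for l, s in scores.items() if l not in relevant]
    return hinge(min(rel) - max(irr))

def averaged_margin_loss(scores, relevant):
    """Improved criterion: average-relevant vs max-irrelevant, plus
    min-relevant vs average-irrelevant, so every label contributes."""
    rel = [scores[l] for l in relevant]
    irr = [s for l, s in scores.items() if l not in relevant]
    avg = lambda xs: sum(xs) / len(xs)
    return hinge(avg(rel) - max(irr)) + hinge(min(rel) - avg(irr))

scores = {"sports": 2.0, "music": 1.0, "politics": 0.5, "tech": -1.0}
loss_a = min_max_margin_loss(scores, {"sports", "music"})
loss_b = averaged_margin_loss(scores, {"sports", "music"})
```

Note how the averaged variant is already zero for this example while the min/max variant still penalizes it: using all labels softens the influence of a single borderline score.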
  • Cyberspace Security
  • HU Tao, DIAN Songyi, JIANG Ronghua
    Computer Engineering. 2020, 46(7): 110-115. https://doi.org/10.19678/j.issn.1000-3428.0055589
    Hardware Trojans pose a huge threat to the reliability of integrated circuit chips. Therefore, this paper proposes a hardware Trojan detection method based on Principal Component Analysis(PCA) and the Long Short-Term Memory(LSTM) neural network. The method uses PCA to extract the feature vector of the current in side-channel information, and the extracted feature vector is used to train the LSTM network classifier for hardware Trojan recognition. Experimental results show that the proposed method can effectively identify Trojans, and can detect hardware Trojans that occupy only 0.74% of the total circuit area.
  • HE Gaofeng, SI Yongrui, XU Bingfeng
    Computer Engineering. 2020, 46(7): 116-121,128. https://doi.org/10.19678/j.issn.1000-3428.0055613
    In order to distinguish the malicious traffic generated by running malicious Android applications from normal traffic, this paper proposes a method for annotating the malicious traffic of Android mobile applications. For encrypted network traffic, encryption detection is performed based on the port number and the byte entropy of the stream payload content. Then whether the encrypted traffic is abnormal is determined based on the server certificate and other content. At the same time, the malicious Android applications are decompiled, and the control flow graph of the program is used to analyze whether the encrypted traffic involves sensitive operations, so as to annotate malicious encrypted traffic. Tests are performed on 300 repackaged malicious mobile applications. Experimental results under the same benchmark show that the proposed method detects 341 malicious encrypted flows, of which only 28 are false alarms. The result is more accurate than annotation without the proposed method, which reports 1 602 malicious encrypted flows.
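The entropy test exploits the fact that ciphertext bytes look uniform, so their Shannon entropy approaches the 8 bits/byte maximum, while protocol plaintext stays far below it. A sketch of that check (the 7.0-bit threshold is an illustrative choice, not the paper's value):

```python
import math
from collections import Counter

def byte_entropy(payload: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte."""
    counts = Counter(payload)
    n = len(payload)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_encrypted(payload: bytes, threshold=7.0) -> bool:
    """Flag a payload as likely encrypted when its entropy is high."""
    return byte_entropy(payload) > threshold

plain = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 20
random_like = bytes(range(256)) * 8   # stands in for ciphertext bytes
```

In the method this check is only the first gate; the server certificate and the decompiled control flow graph then decide whether the encrypted flow is actually malicious.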
  • ZHAO Zongqu, HUANG Lijuan, FAN Tao, MA Shaoti
    Computer Engineering. 2020, 46(7): 122-128. https://doi.org/10.19678/j.issn.1000-3428.0055076
    To solve the problem that existing Authenticated Key Exchange(AKE) protocols have high computational complexity and cannot resist quantum attacks, this paper proposes an AKE protocol based on the R-LWE problem on lattices. The Key Encapsulation Mechanism(KEM) scheme constructed on the R-LWE problem is combined with a digital signature algorithm with message recovery to achieve authentication, and the Peikert-type error reconciliation mechanism is replaced by an encryption-based construction to obtain a uniformly random session key. Analysis results show that, compared with the protocol designed by Bos et al., the proposed protocol has lower computational complexity, significantly reduced traffic, and effectively resists quantum attacks.
  • MA Peng, WANG Zeyu, ZHONG Weidong, WANG Xu'an
    Computer Engineering. 2020, 46(7): 129-135,142. https://doi.org/10.19678/j.issn.1000-3428.0055560
    In Side Channel Attack(SCA), the purity of power data seriously affects the efficiency of power attacks and the accuracy of key cracking, so denoising methods including the Wavelet Transform(WT) and Wavelet Packet Transform(WPT) are widely used in power consumption preprocessing. However, WT tends to ignore high-frequency information when characterizing data, and the noise reduction threshold of WPT is not universal. To solve these problems, this paper proposes a new denoising method for Correlation Power Attack(CPA), which combines Wavelet Packet Decomposition(WPD) with Singular Spectrum Analysis(SSA). WPD is used to decompose the power consumption data, SSA is used to process the low-frequency and high-frequency information, and power consumption information is extracted adaptively according to the distribution trend of singular entropy to improve data quality. Experimental results of chosen-plaintext attacks on the SM4 algorithm show that compared with the original wavelet packet denoising method, the proposed method can effectively improve the signal-to-noise ratio of power consumption data and the efficiency of CPA, and reduce the cost of key cracking.
  • XIN Wenqian, SUN Bing, LI Chao
    Computer Engineering. 2020, 46(7): 136-142. https://doi.org/10.19678/j.issn.1000-3428.0055499
    To analyze the current ability of the LiCi algorithm to resist integral attacks, this paper uses the bit-based division property and the MILP search tool to search for integral distinguishers of the LiCi algorithm. The longest integral distinguisher obtained covers 12 rounds, and is used to mount a 13-round integral attack that recovers 17 bits of key information of the LiCi algorithm. The data complexity of the attack is about 2^63, the time complexity is about 2^100 16-round encryptions, and the storage complexity is about 2^41. In order to obtain an attack over more rounds, a 10-round integral distinguisher is extended by 6 rounds of backward key recovery, and a 16-round integral attack is performed on the LiCi algorithm. The data complexity of this attack is about 2^63.6, the time complexity is about 2^173 16-round encryptions, and the storage complexity is about 2^119. The results of the integral attacks show that the 13-round LiCi algorithm cannot resist integral attacks.
  • LI Chengxing, WANG Jun, XU Jingming
    Computer Engineering. 2020, 46(7): 143-149,158. https://doi.org/10.19678/j.issn.1000-3428.0054756
    The RPL routing protocol is a lightweight distance vector routing protocol for the Internet of Things(IoT). It is vulnerable to Rank attacks, which cause serious network packet loss and significantly affect normal communication between nodes. In order to detect and isolate malicious Rank attack nodes in the RPL routing protocol, this paper proposes a secure RPL routing protocol based on a trust mechanism and a Rank threshold, Sec-RPL, which introduces detection and isolation of malicious nodes. Based on the fact that malicious attacks lower a node's trust value, Sec-RPL preliminarily separates normal nodes from suspected malicious nodes. Then the Rank values of the suspected malicious nodes are compared with the Rank threshold, and the nodes with a Rank value lower than the threshold are isolated as attack nodes to achieve optimal routing decisions. Simulation results show that the Sec-RPL routing protocol performs well in detection success rate, packet loss rate and false alarm rate. Also, it consumes fewer computing resources and has higher security than the OF0-RPL and the original RPL routing protocol.
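The two-stage filter can be sketched as follows: trust first separates trusted nodes from suspects, then an implausibly low advertised Rank confirms which suspects get isolated. The thresholds and node data are illustrative assumptions, not Sec-RPL's actual parameters.

```python
TRUST_THR = 0.5     # assumed trust cut-off for "suspected malicious"
RANK_THR = 256      # assumed minimum plausible Rank at this network depth

def classify(nodes):
    """nodes: {node_id: (trust, advertised_rank)} -> (normal, isolated).
    Trusted nodes pass outright; a suspect advertising a Rank below the
    threshold (the Rank-attack signature) is isolated."""
    normal, isolated = set(), set()
    for nid, (trust, rank) in nodes.items():
        if trust >= TRUST_THR:
            normal.add(nid)       # trusted: accepted
        elif rank < RANK_THR:
            isolated.add(nid)     # suspect with an implausibly low Rank
        else:
            normal.add(nid)       # suspect but plausible Rank: keep watching
    return normal, isolated

normal, isolated = classify({
    "a": (0.9, 512),   # well-behaved node
    "b": (0.3, 128),   # low trust + low Rank: classic Rank attacker
    "c": (0.4, 300),   # low trust but plausible Rank
})
```

Keeping node "c" in the normal set while its trust recovers is what keeps the false alarm rate down.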
  • HAN Shuyan, Nurmamat Helil
    Computer Engineering. 2020, 46(7): 150-158. https://doi.org/10.19678/j.issn.1000-3428.0055413
    Hiding the access structure is a security measure of Ciphertext-Policy Attribute-Based Encryption(CP-ABE) that can effectively prevent the leakage of sensitive information. However, the tree access structures used by existing CP-ABE schemes are either completely open or completely hidden, which results in poor policy confidentiality or a large amount of encryption and decryption computation. To address the problem, this paper proposes a CP-ABE scheme that selectively hides the tree access structure. The mutual information method is used to extract sensitive attribute features, and to filter and hide the attributes in the access structure that carry information about the original attribute set, so that the selectively hidden structure has the same security as the fully hidden structure. At the same time, the decryption ability of users is judged at minimal matching cost, so that users without decryption ability give up decryption as early as possible. Analysis results show that compared with schemes using an open or fully hidden access structure, the proposed scheme has higher security and less computation.
  • LI Feng, SHU Fei, LI Mingxuan, WANG Bin, YANG Huiting
    Computer Engineering. 2020, 46(7): 159-164. https://doi.org/10.19678/j.issn.1000-3428.0054943
    As a high-level form of malicious code, the Remote Access Trojan(RAT) can be used to collect sensitive user information and even launch large-scale attacks through command and control. To accurately detect RATs, this paper proposes a new deep learning-based method that combines static analysis with dynamic behavior analysis to extract file features. By taking advantage of the ability of deep learning to extract sample features layer by layer, the method constructs a sample classification model based on the Recurrent Neural Network(RNN) to detect RATs in Linux. Further, in order to avoid being trapped in local optima, random search is adopted to select the hyperparameters of the model. Experimental results show that compared with models based on traditional machine learning algorithms, the proposed RNN-based sample classification model has higher accuracy and F1 value under the best-performing hyperparameter configuration.
  • Mobile Internet and Communication Technology
  • HAO Zhanjun, HOU Jiaojiao, DANG Xiaochao, QU Nanjiang
    Computer Engineering. 2020, 46(7): 165-172. https://doi.org/10.19678/j.issn.1000-3428.0054771
    When a strip area is deployed in a Wireless Sensor Network(WSN), the amount of data forwarded by a cluster head node is inversely proportional to its distance from the base station, which tends to cause an uneven network load. Therefore, this paper proposes an optimized node coverage method for WSN. The method establishes an energy consumption model for cluster head and sensor nodes. The sensor nodes are then deployed and the strip area is equidistantly clustered into diamond-shaped clusters to implement effective WSN coverage of the strip area. Then, according to the distance from the cluster head nodes to the base station, the number of cluster head nodes in each cluster is optimized, and non-uniform deployment of cluster head nodes is adopted to balance the network energy consumption in the strip area. Experimental results show that the proposed optimization method can improve network utilization and extend the network life cycle compared with the EECS and EDNU methods.
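A cluster-head energy model of the kind the abstract mentions is usually a variant of the first-order radio model: electronics energy per bit plus an amplifier term growing with distance. The constants below are the common textbook values, standing in for the paper's exact parameters.

```python
E_ELEC = 50e-9    # J/bit, transmitter/receiver electronics energy
EPS_FS = 10e-12   # J/bit/m^2, free-space amplifier energy

def tx_energy(bits, d):
    """Energy to transmit `bits` over distance d (free-space, d^2 law)."""
    return bits * E_ELEC + bits * EPS_FS * d ** 2

def rx_energy(bits):
    """Energy to receive `bits`."""
    return bits * E_ELEC

def cluster_head_energy(bits, members, d_to_bs):
    """A cluster head receives from each member, then forwards the
    aggregate to the base station over distance d_to_bs."""
    return members * rx_energy(bits) + tx_energy(members * bits, d_to_bs)
```

The quadratic distance term is exactly why cluster heads far from the base station drain faster, motivating the non-uniform cluster head deployment described above.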
  • LI Jianqi, HUANG Biyao, YANG Tingting, WU Yucheng
    Computer Engineering. 2020, 46(7): 173-178. https://doi.org/10.19678/j.issn.1000-3428.0055687
    In order to realize fast and reliable spectrum recognition for wireless power communication in the 230 MHz frequency band, this paper proposes a Cooperative Stepped Frequency Domain Energy Detection(CSFDED) algorithm. The algorithm performs two rounds of stepped energy detection in the frequency domain. The first round quickly scans the entire frequency band with a larger step value, and the frequency bands found to be occupied are subjected to a second round of stepped energy detection with a smaller step value. At the same time, key parameters such as the monitored frequency band and the step value can be flexibly configured according to actual needs, and finally the usage of frequency points in the 230 MHz band is obtained. Simulation results show that, compared with the traditional frequency domain energy detection algorithm, the proposed algorithm obtains a higher detection probability under low Signal-to-Noise Ratio(SNR) conditions, and is suitable for real-time spectrum monitoring in the 230 MHz power-specific frequency band.
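The two-round coarse/fine scan can be sketched over a synthetic power spectrum: a wide step flags busy coarse bands, then only those bands are rescanned with a fine step. Step sizes, thresholds, and the spectrum are all illustrative.

```python
def band_energy(spectrum, lo, hi):
    """Total energy in the bins [lo, hi)."""
    return sum(spectrum[lo:hi])

def stepped_detect(spectrum, coarse=8, fine=2, thr_coarse=4.0, thr_fine=1.5):
    """Pass 1: wide-step scan of the whole band; pass 2: fine-step
    rescan only of the coarse blocks whose energy exceeded threshold."""
    occupied = []
    for lo in range(0, len(spectrum), coarse):
        if band_energy(spectrum, lo, lo + coarse) > thr_coarse:
            for f in range(lo, min(lo + coarse, len(spectrum)), fine):
                if band_energy(spectrum, f, f + fine) > thr_fine:
                    occupied.append(f)   # fine bin start flagged as in use
    return occupied

# 32 frequency bins of noise; bins 10-11 carry a strong signal
spectrum = [0.1] * 32
spectrum[10] = spectrum[11] = 3.0
busy = stepped_detect(spectrum)
```

Only one coarse block out of four triggers the fine pass, which is where the speed-up over a uniformly fine scan comes from.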
  • LIU Zhiguo, SONG Guangyue, CAI Wenzhu, LIU Qingli
    Computer Engineering. 2020, 46(7): 179-184. https://doi.org/10.19678/j.issn.1000-3428.0055421
    In order to solve the difficulty of frame delimitation for bit-stream communication data in unknown network environments, this paper proposes a frame location method based on the TextRank algorithm. The weight of nodes in the bit stream is determined by the occurrence frequency of sequences in the data. Then the TextRank-based BitstreamRank algorithm is used to determine the key sequences in the unknown protocol data, and based on the key sequences, the bit stream is segmented to calculate the sequence similarity between bit stream segments. Thus the frame header of the unknown protocol data can be located. Simulation results show that the proposed method can quickly and effectively analyze unknown network protocol data, and locate the position of each frame in the bit stream data with an accuracy of over 90%.
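The frequency step underlying this idea can be sketched directly: count fixed-length bit sequences, take the most frequent one as the candidate key sequence (a stand-in for the full BitstreamRank ranking), and split the stream at its occurrences. The stream and sequence length are synthetic.

```python
from collections import Counter

def ngram_counts(bits: str, n: int) -> Counter:
    """Occurrence counts of every length-n subsequence in the stream."""
    return Counter(bits[i:i + n] for i in range(len(bits) - n + 1))

def key_sequence(bits: str, n: int) -> str:
    """Most frequent length-n sequence: the frame-header candidate."""
    return ngram_counts(bits, n).most_common(1)[0][0]

def split_frames(bits: str, key: str):
    """Segment the stream at each occurrence of the key sequence."""
    return [f for f in bits.split(key) if f]

# three frames, each prefixed by the (unknown to us) header 10110100
stream = "10110100" + "111100001" + "10110100" + "010101011" + "10110100"
key = key_sequence(stream, 8)
frames = split_frames(stream, key)
```

The real method goes further, ranking candidate sequences with a TextRank-style walk and checking similarity between the resulting segments, but the repetition of the header is what makes it discoverable at all.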
  • GAO Zhongxia, QIU Runhe
    Computer Engineering. 2020, 46(7): 185-191. https://doi.org/10.19678/j.issn.1000-3428.0055353
    In order to reduce network resource consumption and ensure green communication, this paper proposes a scheme to balance the Spectrum Efficiency(SE) and Energy Efficiency(EE) of Decode-and-Forward(DF) Two-Way Relay Transmission(TWRT) and One-Way Relay Transmission(OWRT). The spectrum efficiency of DF-TWRT and DF-OWRT is increased by using an optimal power allocation method, and the expressions of the SE and EE of the TWRT and OWRT systems are obtained under optimal power allocation. The relationship between SE and EE is analyzed, and the energy efficiency of the two systems is maximized by optimizing the total transmission power. Simulation results show that the SE and EE of the proposed scheme are higher than those of equal power allocation at the same data transmission rate.
  • DING Qingfeng, GAO Xinpeng, DENG Yuqian
    Computer Engineering. 2020, 46(7): 192-197,205. https://doi.org/10.19678/j.issn.1000-3428.0055237
    In order to improve the beamforming gain of the massive Multiple-Input Multiple-Output(MIMO) relay system and reduce the hardware cost of phase shifters and radio frequency chains in the hybrid precoding architecture, this paper proposes a hybrid precoding algorithm based on discrete orthogonal matching pursuit for relay networks. Aiming at maximizing the spectral efficiency of the system, the multi-node complex optimization problem is decoupled to reduce the complexity of solving for the optimal hybrid precoding matrix. Then the solution of the hybrid precoding matrix of the relay node is transformed into a spatially sparse reconstruction problem. Finally, the discrete orthogonal matching pursuit algorithm is used to obtain the discretized joint solution for the analog precoding at the receiving and transmitting ends of the relay. Simulation results show that compared with all-digital precoding and the infinite-precision orthogonal matching pursuit algorithm, the proposed algorithm reduces the quantization loss at the transmitting end of the relay. Also, the spectral efficiency reached by using low-precision phase shifters is close to that reached by using full-precision phase shifters.
  • ZHANG Taijiang, LI Yongjun, ZHAO Shanghong
    Computer Engineering. 2020, 46(7): 198-205. https://doi.org/10.19678/j.issn.1000-3428.0055504
    Satellite network topology is time-varying,causing frequent link interruptions and long end-to-end delays between satellites.To address the problem,this paper constructs a double-layer network architecture for satellites on Geosynchronous Earth Orbit(GEO)/Low Earth Orbit(LEO),and implements a layered and clustered design for the GEO/LEO double-layer satellite network.On this basis,an optimized Temporally Ordered Routing Algorithm(TORA),HCR,is proposed.On the LEO layer,the HCR algorithm is used to build multiple loop-free paths from source satellites to target satellites.When network congestion takes place on the LEO layer,satellites on the GEO layer are used for layered data transmission.Simulation results show that compared with the traditional Dijkstra Shortest Path(DSP) algorithm,the proposed HCR algorithm can effectively balance the data traffic of the satellite network,improving the reliability and flexibility of satellite network management.
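For reference, the DSP baseline named above is the classical Dijkstra single-source shortest-path algorithm. A minimal sketch over a hypothetical inter-satellite topology (link costs as hop delays; node names and costs are illustrative only):

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances from src; graph maps node -> [(neighbor, cost)]."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already relaxed via a shorter path
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy topology: A->B->C->D (1+2+3) beats the direct A->C and B->D links
g = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 6)], "C": [("D", 3)]}
assert dijkstra(g, "A")["D"] == 6
```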
  • LI Weiyong, WU Que, ZHANG Wei, CHEN Yunfang
    Computer Engineering. 2020, 46(7): 206-215. https://doi.org/10.19678/j.issn.1000-3428.0055506
    To address the limitation of dependence on the OpenFlow protocol,this paper proposes a fast source routing forwarding scheme that applies Protocol-Oblivious Forwarding(POF) technologies to Software-Defined Wide Area Network(SD-WAN).The scheme takes advantage of the high programmability of POF and encapsulates the complete forwarding path by reorganizing the packet header fields of source routing.It realizes the packet processing algorithm in switches through the flow instruction set provided by POF,and uses the pipelining technique in the multi-stage flow table design.Then by abstracting and reusing the same matching logic,the scheme decreases the number of flow table entries in the whole network and increases the forwarding efficiency and scalability of the system.An experimental POF platform based on Mininet is built and tested with flooding traffic in single link topology and multicast tree topology environments.The experimental results show that POF technology can achieve better source routing forwarding performance when applied to SD-WAN.
  • YANG Lu, HUANG Junxi, LI Yuan
    Computer Engineering. 2020, 46(7): 216-221. https://doi.org/10.19678/j.issn.1000-3428.0055735
    Faster-Than-Nyquist(FTN) signaling can improve the system effectiveness without reducing the system reliability,but it brings serious inter-symbol interference and severely affects the equalization performance of the system.Although the single carrier equalization method based on MMSE-NP-RISIC can reduce the channel noise and residual inter-symbol interference to some extent,it has the problem of error propagation,which leads to the decrease of equalization accuracy.To address the problem,this paper proposes an iterative MMSE-NP-RISIC equalization algorithm,which considers the decision error to iteratively update the coefficients of the predictor and the RISI filter.Thus error propagation can be relieved and the influence of channel noise and residual inter-symbol interference on the system performance can be reduced.Simulation results show that compared with the non-iterative MMSE-NP-RISIC equalization algorithm,the proposed algorithm decreases the Bit Error Rate(BER) and improves the system equalization performance.
  • Graphics and Image Processing
  • CHENG Xixi, ZHANG Yanling, TIAN Junwei
    Computer Engineering. 2020, 46(7): 222-227. https://doi.org/10.19678/j.issn.1000-3428.0055021
    Corner point detection is an important part of camera calibration,as its detection accuracy directly affects the accuracy of camera calibration.Camera calibration can determine the internal and external parameters of the camera by performing corner point detection on checkerboard images,but traditional methods preserve much redundant information while detecting the corners of the board,and fail to give the accurate positions of the corner points.To address the problem,this paper proposes a more efficient checkerboard corner point detection method.A checkerboard corner point is characterized by being the junction point of two pairs of symmetric local gray areas,and this feature can be used to build the matching template.Response points with higher matching degrees are used as candidate corner points.The threshold method is used to remove non-corner points,and non-maximum suppression and gradient statistics are used to screen the target corner points.Experimental results show that compared with corner point detection methods that optimize the corner point set by checkerboard texture and geometric features,the proposed method can improve the detection accuracy,shorten the detection process,and effectively detect checkerboard images and distorted images in complex environments.
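The non-maximum suppression step named above can be sketched on a corner-response map. This is a generic illustration, not the paper's exact screening pipeline; the response values and threshold below are hypothetical.

```python
def non_max_suppression(response, radius=1, threshold=0.5):
    """Keep (row, col) whose response exceeds threshold and is the strict
    maximum within a (2*radius+1)^2 neighborhood."""
    h, w = len(response), len(response[0])
    corners = []
    for r in range(h):
        for c in range(w):
            v = response[r][c]
            if v < threshold:
                continue  # thresholding rejects non-corner responses
            neighborhood = [
                response[rr][cc]
                for rr in range(max(0, r - radius), min(h, r + radius + 1))
                for cc in range(max(0, c - radius), min(w, c + radius + 1))
                if (rr, cc) != (r, c)
            ]
            if all(v > n for n in neighborhood):
                corners.append((r, c))
    return corners

response = [
    [0.1, 0.2, 0.1, 0.0, 0.0],
    [0.2, 0.9, 0.2, 0.0, 0.0],
    [0.1, 0.2, 0.1, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.2, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.8],
]
# only the two local peaks above threshold survive
assert non_max_suppression(response) == [(1, 1), (4, 4)]
```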
  • WANG Jinhe, SU Cuili, MENG Fanyun, CHE Zhilong, TAN Hao, ZHANG Nan
    Computer Engineering. 2020, 46(7): 228-234,242. https://doi.org/10.19678/j.issn.1000-3428.0055428
    Convolutional Neural Network(CNN) is often used in image processing algorithms because of its excellent representation capabilities,but the process is time-consuming and often results in information loss.To address the problem,this paper proposes a CNN structure based on the Asymmetric Spatial Pyramid Pooling(ASPP) model.An ASPP method is designed and integrated with the stereo matching network to obtain more specific information about image features.Then convolutional layers with a 3×3 convolution kernel are stacked on those with a 1×1 convolution kernel for multi-scale information fusion and improvement of network convergence speed.Also,the number of network layers is increased from four to seven to improve the matching accuracy.Parallax prediction is performed on the KITTI and Middlebury datasets.Experimental results show that,compared with the benchmark network,the proposed network structure shortens the convergence time by about 50.1% and reduces the matching error rate from 6.65% to 4.78%,achieving a smoother parallax effect in stereo matching.
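A quick parameter count shows why stacking a 1×1 kernel before a 3×3 kernel is cheaper than one wide 3×3 layer, which is part of why such stacks speed up convergence. The channel widths below are hypothetical, not the paper's configuration:

```python
def conv_params(k, c_in, c_out, bias=True):
    # parameters of a k x k convolution layer: k*k*c_in*c_out (+ c_out biases)
    return k * k * c_in * c_out + (c_out if bias else 0)

c_in, c_mid, c_out = 256, 64, 256
direct = conv_params(3, c_in, c_out)                         # one wide 3x3 layer
bottleneck = conv_params(1, c_in, c_mid) + conv_params(3, c_mid, c_out)
assert bottleneck < direct   # the 1x1 "bottleneck" cuts the parameter count
```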
  • ZHAO Xin, HOU Guojia, PAN Zhenkuan, LI Jingming, WANG Guodong
    Computer Engineering. 2020, 46(7): 235-242. https://doi.org/10.19678/j.issn.1000-3428.0055324
    Although existing underwater image quality assessment methods achieve high accuracy,they correlate weakly with the subjective assessment of human beings,so it is difficult for them to achieve high-quality evaluation.To address the problem,this paper proposes an Underwater Image Quality Assessment(UIQA) method based on the human vision system.The method makes a linear combination of color saturation based on the CIELab surface color system,brightness contrast based on the dark channel theory,and image sharpness,so as to evaluate underwater image quality in different scenes.Experimental results show that compared with the quality assessment results of the UIQM and UCIQE methods,those of the proposed UIQA method are more consistent with the subjective evaluation results and have higher correlation with the subjective evaluation of human beings.
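The linear-combination scoring described above can be sketched as follows. The weights here are purely illustrative, not the paper's fitted coefficients, and the attribute scores are assumed to be pre-normalized to [0, 1]:

```python
def uiqa_score(saturation, contrast, sharpness, weights=(0.4, 0.3, 0.3)):
    """Linear combination of three normalized image attributes.
    Weights are hypothetical placeholders, not the published coefficients."""
    w1, w2, w3 = weights
    return w1 * saturation + w2 * contrast + w3 * sharpness

clear = uiqa_score(0.8, 0.7, 0.9)   # a clear underwater scene scores high
murky = uiqa_score(0.3, 0.2, 0.2)   # a murky scene scores low
assert clear > murky
```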
  • SUO Jing, SONG Linlin, LI Qiang
    Computer Engineering. 2020, 46(7): 243-250,259. https://doi.org/10.19678/j.issn.1000-3428.0055065
    Most existing classification methods for image sets are costly,with high computational complexity and poor timeliness.To address the problem,this paper proposes an improved image reconstruction and recognition algorithm.The algorithm uses Linear Regression Classification(LRC) and Shared Nearest Neighbor(SNN) subspace classification theory for image reconstruction and classification.The high-dimensional space built by image subsampling is taken as the subspace to avoid a training process with high computational complexity.Then,the subspaces of different categories of image sets are used to estimate regression models for test images.The category of the test set is determined by a weighted voting strategy under the principle that the errors between reconstructed images and original images should be minimized.Experimental results on the UCSD/Honda,CMU,ETH-8 and YouTube datasets show that under low-resolution sampling conditions,compared with the ADNT algorithm,the proposed algorithm increases the average classification accuracy by 3.6%,improves computational efficiency by a factor of 10,and shortens the fastest response time to 2.8 ms.
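The final decision step, voting across a set of per-image reconstruction errors, can be sketched as below. The inverse-error weighting is one plausible reading of the weighted voting strategy, not the paper's exact rule, and the error values are invented:

```python
from collections import defaultdict

def weighted_vote(errors_per_image):
    """errors_per_image: list of {class_label: reconstruction_error} dicts,
    one per test image.Each image votes for its minimum-error class with
    weight 1/error; the set label is the class with the largest total."""
    votes = defaultdict(float)
    for errors in errors_per_image:
        label = min(errors, key=errors.get)       # minimum reconstruction error
        votes[label] += 1.0 / (errors[label] + 1e-12)
    return max(votes, key=votes.get)

errors = [
    {"A": 0.2, "B": 0.9},   # votes A with weight 5
    {"A": 0.8, "B": 0.4},   # votes B with weight 2.5
    {"A": 0.1, "B": 0.7},   # votes A with weight 10
]
assert weighted_vote(errors) == "A"
```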
  • XIONG Yahui, CHEN Dongfang, WANG Xiaofeng
    Computer Engineering. 2020, 46(7): 251-259. https://doi.org/10.19678/j.issn.1000-3428.0055551
    In order to solve the problem that existing mainstream super-resolution image reconstruction algorithms fail to fully utilize the detailed information in Low-Resolution(LR) images,this paper proposes a super-resolution image reconstruction algorithm based on multi-scale back projection.The algorithm uses multiple convolutional kernels of different scales to extract feature information of different dimensions in the shallow feature extraction layer.Then the extracted feature information is input into the back projection module,and the upsampling and downsampling methods are used alternately to optimize the projection error between High-Resolution(HR) and LR images.Also,the idea of residual learning is used to connect the features extracted in the upsampling and downsampling stages in a cascade manner,so as to improve the image reconstruction effect.Experimental results on the Set5,Set14 and Urban100 datasets show that the proposed algorithm improves the Peak Signal-to-Noise Ratio(PSNR) and Structural Similarity(SSIM) compared with five mainstream algorithms including Bicubic,SRCNN,ESPCN,VDSR and LapSRN.
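For reference, the PSNR metric used in the comparison above is computed from the mean squared error between the reference and reconstructed images. A minimal sketch on flat pixel lists (toy values, 8-bit peak assumed):

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two equal-size images,
    given here as flat lists of pixel intensities."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

ref = [10, 20, 30, 40]
out = [11, 19, 31, 39]
# MSE = 1, so PSNR = 10*log10(255^2) ~ 48.13 dB
assert abs(psnr(ref, out) - 48.13) < 0.01
```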
  • GU Yan, ZHAO Chongyu, HUANG Ping
    Computer Engineering. 2020, 46(7): 260-267,276. https://doi.org/10.19678/j.issn.1000-3428.0055259
    Deep hashing has been widely used in large-scale image retrieval for its advantages in retrieval efficiency and storage cost.To enhance the discrimination ability of hash codes and improve the retrieval accuracy and efficiency,this paper proposes a deep hash learning model,BCI-DHH,based on high-order statistical information.The improved VGG-m model is employed to extract intra-layer auto-correlation features and inter-layer cross-correlation features from input images,and to generate a normalized high-order statistical vector.Then weighting parameters are introduced to balance the numbers of positive and negative samples,and on this basis a contrastive loss function based on data balance is proposed.Then the multi-level index hash blocks corresponding to dissimilar image pairs are differentiated to increase the Hamming distance between dissimilar images and the query image,so as to optimize the compatibility of the multi-level hash index.Experimental results on benchmark datasets demonstrate that the proposed model outperforms BDH,DSH and other methods in terms of retrieval accuracy and efficiency.
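The retrieval step behind such hashing models, ranking stored binary codes by Hamming distance to the query code, can be sketched as follows. The 8-bit codes and image names are invented for illustration; real deep-hash codes are much longer.

```python
def hamming(a, b):
    # Hamming distance between two equal-length binary hash codes (as ints)
    return bin(a ^ b).count("1")

def retrieve(query, database, k=2):
    """Rank database entries (name, code) by Hamming distance to the query."""
    return sorted(database, key=lambda item: hamming(query, item[1]))[:k]

db = [("img1", 0b10110010), ("img2", 0b10110011), ("img3", 0b01001100)]
top = retrieve(0b10110010, db)
assert [name for name, _ in top] == ["img1", "img2"]   # nearest codes first
```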
  • LI Wenbin, HE Ran
    Computer Engineering. 2020, 46(7): 268-276. https://doi.org/10.19678/j.issn.1000-3428.0057070
    Aircraft target detection in remote sensing images frequently faces problems including complex backgrounds and large variations in target scale.To address these problems,this paper proposes a model,DC-DNN,based on deep neural networks for aircraft detection in remote sensing images.The bottom-layer features of images are used to make pixel-level labels for the training of a Fully Convolutional Network(FCN).The FCN model and the DBSCAN algorithm are combined to select self-adaptive candidate regions of the aircraft target,and the high-level features of the candidate regions are extracted based on the VGG-16 net to obtain the detection frames of the aircraft target.Also,a new detection frame suppression algorithm is proposed to eliminate overlapping frames and false detection frames to obtain the final detection result.Experimental results show that the proposed DC-DNN model achieves an accuracy of 95.78%,a recall of 98.98% and an F1 score of 0.973 5 for aircraft target detection in remote sensing images,and it has better detection performance and generalization capability than WS-DNN,R-FCN and other models.
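The standard baseline for the detection frame suppression described above is greedy IoU-based suppression: keep the highest-scoring box and drop any box overlapping it too much. A minimal generic sketch (not the paper's new suppression algorithm; boxes and scores are invented):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def suppress(boxes, scores, thresh=0.5):
    """Greedy suppression: keep the best box, drop heavily overlapping ones."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
assert suppress(boxes, scores) == [0, 2]   # box 1 overlaps box 0 and is dropped
```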
  • Development Research and Engineering Application
  • MA Jinlin, CHEN Deguang, MA Ziping, WEI Lin
    Computer Engineering. 2020, 46(7): 277-285. https://doi.org/10.19678/j.issn.1000-3428.0055178
    AlexNet does not perform well in the multi-target classification of CAPTCHAs due to its large number of parameters and heavy floating point computation.To address the problem,this paper proposes a CAPTCHA recognition method optimized by Petri nets.The method uses Petri net theory to model AlexNet and DenseNet-BC,and optimizes the network structures and parameters with the built models.At the same time,according to the relationship between the number of model parameters and the amount of floating point computation,the concept of hyperactivity is proposed.Then sensitivity analysis is carried out on the Petri-ANPP-net,Petri-ANPS-net and Petri-DNBC-net models.Experimental results show that after the Petri net-based optimization,the highest accuracy of the Petri-ANPP-net model is 60.40%,and its hyperactivity and model sensitivity are poor.The highest accuracy of the Petri-ANPS-net model is 97.50%,but its hyperactivity and model sensitivity are poor.The highest accuracy of the Petri-DNBC-net model is 99.24%,and its hyperactivity and model sensitivity are high.The results show that Petri nets can optimize network model structures and parameters to a certain extent,and hyperactivity has certain advantages in evaluating model sensitivity.
  • HE Zhuoheng, LIU Zhiyong, LI Lu, LI Changming, ZHANG Lin
    Computer Engineering. 2020, 46(7): 286-293,299. https://doi.org/10.19678/j.issn.1000-3428.0054925
    This paper compares the DOM,SAX,JDOM and DOM4J methods for parsing XML texts in heterogeneous text data conversion.The pros and cons of the four parsing methods are judged based on parsing time,memory heap space and CPU occupancy rate.The advantage of this evaluation method is that when the amount of data or the data attributes change,it still discriminates well among the four parsing methods.Experimental results on 10 XML datasets converted from heterogeneous Web log text data show that when the amount of data increases and parsing time is the main concern,the DOM4J parsing method is superior to the other three.When space occupation is the main concern,the SAX parsing method is superior to the other three.
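The DOM-versus-SAX distinction underlying the comparison can be demonstrated with Python's standard library (the paper's experiments use Java-side parsers such as JDOM and DOM4J; this sketch only contrasts the two parsing styles on a toy Web-log snippet):

```python
import xml.etree.ElementTree as ET
import xml.sax

XML = "<log><entry ip='1.2.3.4'/><entry ip='5.6.7.8'/></log>"

# DOM-style: the whole tree is materialized in memory, then queried
tree = ET.fromstring(XML)
dom_ips = [e.get("ip") for e in tree.iter("entry")]

# SAX-style: parse events are streamed; only handler state stays in memory
class EntryHandler(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.ips = []
    def startElement(self, name, attrs):
        if name == "entry":
            self.ips.append(attrs.getValue("ip"))

handler = EntryHandler()
xml.sax.parseString(XML.encode("utf-8"), handler)

assert dom_ips == handler.ips == ["1.2.3.4", "5.6.7.8"]
```

The tradeoff in the abstract follows from this difference: DOM-style parsers pay memory for random access to the whole tree, while SAX-style parsers keep only streaming state.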
  • WU Zhengyue, ZHANG Chao, LIN Yan
    Computer Engineering. 2020, 46(7): 294-299. https://doi.org/10.19678/j.issn.1000-3428.0055565
    RBPF-based laser SLAM algorithms suffer from sample dilution in the resampling process and inaccurate laser measurement models.To address these problems,this paper proposes an optimized laser SLAM algorithm.In order to alleviate sample dilution in resampling,the Minimum Sampling Variance(MSV) resampling method is used to improve the original resampling method and preserve the diversity of the resampled particles.Then the likelihood field model is combined with the probability of unexpected objects to make the laser measurement model better reflect the real environment.Simulation results show that the optimized algorithm achieves excellent positioning performance,and outperforms the original laser SLAM algorithm in terms of mapping and positioning accuracy in dynamic environments.
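The abstract does not detail the MSV resampler itself; for context, the standard low-variance (systematic) resampling scheme that such improvements are usually measured against can be sketched as below. The particle weights are invented:

```python
import random

def systematic_resample(weights, rng=random):
    """Low-variance (systematic) resampling: draw one random offset, then
    sweep evenly spaced pointers over the cumulative weight distribution."""
    n = len(weights)
    total = sum(weights)
    step = total / n
    pointer = rng.uniform(0, step)
    indices, cumulative, i = [], weights[0], 0
    for _ in range(n):
        while pointer > cumulative and i < n - 1:
            i += 1
            cumulative += weights[i]
        indices.append(i)
        pointer += step
    return indices

random.seed(0)
idx = systematic_resample([0.1, 0.1, 0.7, 0.1])
assert len(idx) == 4
assert idx.count(2) >= 2   # the heavy particle is duplicated, as expected
```

Systematic resampling keeps the particle set's variance low, but still duplicates heavy particles; diversity-preserving schemes such as MSV target exactly this side effect.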
  • WANG Guilin, XU Yong
    Computer Engineering. 2020, 46(7): 300-305. https://doi.org/10.19678/j.issn.1000-3428.0054594
    To address the stabilization problem of evolutionary congestion games with time delay,this paper proposes a stabilization method based on the semi-tensor product of matrices.The semi-tensor product method is used to describe the evolutionary congestion game with time delay as a logical dynamic system,and an equivalent algebraic form is given.On this basis,the dynamic behavior of the game is analyzed,and it is proved that the fixed point of the game is the Nash equilibrium point.The necessary and sufficient conditions for the game to be globally stabilized to the Nash equilibrium under open-loop control and state feedback control are presented,together with the controller design process.Analysis results show that the dynamic system of the evolutionary congestion game with time delay can be globally stabilized to the Nash equilibrium under open-loop control and state feedback control,which demonstrates the effectiveness of the method.
  • ZHANG Chiming, WANG Qingfeng, LIU Zhiqin, HUANG Jun, CHEN Bo, FU Jie, ZHOU Ying
    Computer Engineering. 2020, 46(7): 306-311,320. https://doi.org/10.19678/j.issn.1000-3428.0055204
    Chest X-ray is commonly used in the examination of multiple types of frequently occurring chest diseases.However,chest diseases differ greatly in pathological morphology,size and location,and the distribution of disease samples is imbalanced,so it is challenging to detect and locate chest diseases by deep learning.To address these problems,a diagnostic algorithm for chest diseases is proposed.Firstly,adaptive feature recalibration is implemented through the squeeze-excitation module to improve the fine-grained classification ability of the network.Secondly,the spatial mapping ability of the network for pathological features is enhanced by the global max-average pooling layer.Then the focal loss function is used to reduce the weight of easily classified samples,so that the model can focus more on learning easily misclassified samples during training.Finally,weakly supervised lesion area localization and visualization are implemented through Gradient-weighted Class Activation Mapping(GCAM),providing corresponding visual interpretation of network prediction results.Training and evaluation on the official data split of ChestX-Ray14 show that the proposed algorithm performs well in the diagnosis of 14 frequently occurring chest diseases with an average AUC of 0.83.
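The down-weighting behavior of the focal loss mentioned above can be seen in its standard binary form (a generic sketch with default gamma and alpha; the paper's exact hyperparameters are not stated here):

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: the (1 - pt)^gamma factor down-weights
    well-classified examples so training focuses on hard ones.
    p: predicted probability of the positive class; y: label in {0, 1}."""
    pt = p if y == 1 else 1.0 - p
    a = alpha if y == 1 else 1.0 - alpha
    return -a * (1.0 - pt) ** gamma * math.log(pt)

easy = focal_loss(0.95, 1)   # confidently correct positive: tiny loss
hard = focal_loss(0.10, 1)   # misclassified positive: large loss
assert hard > 100 * easy     # hard examples dominate the gradient signal
```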
  • XIA Yuantian, ZHOU Juxiang, XU Tianwei
    Computer Engineering. 2020, 46(7): 312-320. https://doi.org/10.19678/j.issn.1000-3428.0055513
    To address the influence of time-varying unknown control directions and time-varying delays on nonlinear time-delay systems,this paper proposes an adaptive iterative learning control method.The method uses the local Lipschitz continuity condition to introduce differential and difference coupling parameter update laws.Then based on the Nussbaum gain technique and the idea of signal replacement and recombination,a new control scheme is obtained.The convergence of the tracking error and the boundedness of each signal in the system are theoretically proven.Simulation experiments demonstrate the correctness of the theoretical derivation and the excellent timeliness of the system.