
15 October 2020, Volume 46 Issue 10
    

  • Hot Topics and Reviews
  • JING Zhuangwei, GUAN Haiyan, PENG Daifeng, YU Yongtao
    Computer Engineering. 2020, 46(10): 1-17. https://doi.org/10.19678/j.issn.1000-3428.0058018
    With the rapid development of deep learning and its widespread application in semantic segmentation, the quality of semantic segmentation has been significantly improved. This paper reviews and analyzes the mainstream deep neural network-based methods for semantic image segmentation. According to the way the networks are trained, existing semantic image segmentation methods are categorized into fully supervised learning-based methods and weakly supervised learning-based methods. The performance, advantages and disadvantages of representative algorithms in these two categories are compared and analyzed, and the contributions of deep neural networks to semantic segmentation are detailed systematically. On this basis, the paper summarizes the current mainstream public datasets and remote sensing datasets, and compares the segmentation performance of mainstream semantic image segmentation methods. Finally, the paper discusses the challenges faced by existing semantic segmentation techniques and future development trends.
  • BAO Yuhan, FU Yinjin, CHEN Weiwei
    Computer Engineering. 2020, 46(10): 18-32,40. https://doi.org/10.19678/j.issn.1000-3428.0058345
    Traditional single-cloud storage suffers from data security and service flexibility problems, such as data privacy leakage and difficulty in meeting the needs of online real-time applications. Multi-cloud storage technology uses virtualization to integrate the online storage services of multiple cloud providers, so as to realize unified storage, performance tuning, data security and privacy protection, maximizing the value of cloud storage resources. This paper introduces the concept of multi-cloud storage technology, as well as its system architecture and advantages. It then describes the main technical challenges of multi-cloud storage in data availability, integrity, consistency and security, and focuses on analyzing and summarizing the research status of its key techniques, including erasure code-based fault tolerance, proof mechanisms for data integrity, concurrency control methods and secure deduplication. Finally, based on the weaknesses of current research on multi-cloud storage technology, this paper analyzes several potential further research directions.
  • YANG Tian, TIAN Lin, SUN Qian, ZHANG Zongshuai, WANG Yuanyuan
    Computer Engineering. 2020, 46(10): 33-40. https://doi.org/10.19678/j.issn.1000-3428.0056981
    Most of the existing computing offloading schemes in Mobile Edge Computing (MEC) pre-set unified weight factors, which fail to meet the different requirements of users for delay and energy consumption. To address the problem, this paper proposes a computing offloading scheme based on user experience. The scheme defines the computing offloading problem as a utility maximization problem, and the user utility is represented by the weighted sum of the task execution delay and the gain rate of energy consumption. Meanwhile, the scheme considers the battery life of the user's device, and constructs an adaptive weight factor based on user demand. On this basis, the original optimization problem is divided into two sub-problems of resource allocation and offloading decision, which are solved respectively to obtain the final computing offloading strategy. Simulation results show that compared with offloading schemes with fixed weight factors, the proposed scheme can meet different requirements of users and effectively improve the user experience.
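The abstract does not give the exact utility formula, so the following is only an illustrative sketch of a weighted delay/energy utility with a battery-driven adaptive weight; the gain definitions and the weight construction are assumptions, not the paper's equations.

```python
# Illustrative sketch of a user-experience utility of the kind described above.
# The gain definitions and the battery-driven weight are assumptions.

def offload_utility(t_local, e_local, t_offload, e_offload, battery_level):
    """Weighted sum of delay gain and energy gain for one task.

    battery_level in [0, 1]; a low battery shifts weight toward energy saving.
    """
    w_energy = 1.0 - battery_level                  # assumed adaptive weight factor
    w_delay = battery_level
    delay_gain = (t_local - t_offload) / t_local    # relative delay reduction
    energy_gain = (e_local - e_offload) / e_local   # relative energy reduction
    return w_delay * delay_gain + w_energy * energy_gain

# Example: offloading halves the delay and cuts energy by 70% on a 30% battery.
print(offload_utility(t_local=1.0, e_local=1.0, t_offload=0.5, e_offload=0.3,
                      battery_level=0.3))
```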
  • OUYANG Hengyi, XIONG Yan, HUANG Wenchao
    Computer Engineering. 2020, 46(10): 41-45,51. https://doi.org/10.19678/j.issn.1000-3428.0056812
    To address the frequent security incidents involving token smart contracts, this paper proposes a formal modeling and verification method for integer overflow vulnerabilities in token smart contracts. The DAO and BEC vulnerability attacks are analyzed, and on this basis the security attributes of token smart contracts are defined. The method then introduces global variables, numeric comparisons and other constraints to extend the modeling language of token smart contracts, so that it can support the formal representation of various smart contract statements. Finally, the idea of mathematical induction is used to optimize the model verification procedure of the SmartVerif tool in order to avoid infinite traversal of the state space. Experimental results show that the proposed method can successfully detect integer overflow vulnerabilities in token smart contracts and has strong versatility.
  • MAO Xiangjie, ZHANG Pin
    Computer Engineering. 2020, 46(10): 46-51. https://doi.org/10.19678/j.issn.1000-3428.0056404
    Most existing verification schemes for cloud data integrity use a single verification method, but the diversity of user data makes it difficult for such schemes to meet all user requirements. To solve the problem, this paper proposes a hybrid verification scheme for cloud data integrity. The scheme adopts different audit methods for dynamic data and static data, using BLS signatures for efficient static verification and a Large Branching Tree (LBT) for dynamic verification, so as to meet different kinds of data integrity verification requirements. Performance analysis and experimental results show that the proposed scheme can reduce the overall computational overhead and communication cost of the system, and improve the verification efficiency.
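For reference, the BLS primitive named above works as follows in its standard textbook form; the paper's audit protocol may add challenge and aggregation steps that are not shown here.

```latex
% Standard BLS signature over a bilinear pairing e: G_1 x G_2 -> G_T
\begin{align*}
  \text{KeyGen:} \quad & x \xleftarrow{\$} \mathbb{Z}_q, \qquad pk = g^{x} \in \mathbb{G}_2 \\
  \text{Sign:}   \quad & \sigma = H(m)^{x} \in \mathbb{G}_1 \\
  \text{Verify:} \quad & e(\sigma,\, g) \stackrel{?}{=} e\bigl(H(m),\, pk\bigr)
\end{align*}
```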
  • Artificial Intelligence and Pattern Recognition
  • NIU Yaoqiang, MENG Yuyu, NIU Quanfu
    Computer Engineering. 2020, 46(10): 52-59. https://doi.org/10.19678/j.issn.1000-3428.0055861
    To improve the accuracy of text recommendation in the big data environment, this paper merges two kinds of heterogeneous data, text data and relational networks, and introduces the encoder-decoder framework. On this basis, a Recurrent Neural Network (RNN) model based on heterogeneous attention is proposed for short-term text recommendation. The sentence-level Distributed Memory Model of Paragraph Vectors (PV-DM) and the entity relation representation method TransR are used to embed the text data and the relational network into high-dimensional vectors as the input of the model. In the encoding stage, the short-term interests of users are introduced into the recommendation model by using a bidirectional GRU, and the attention mechanism is used to connect with the decoder, so that the decoder can dynamically select and linearly combine different parts of the encoder's input sequence in order to model the short-term interests of users. In the decoding stage, the attention output of the encoder, the candidate items and the representation of the current user are taken as inputs, and the score of each candidate item is calculated with the bidirectional GRU and a feedforward network layer to obtain the recommendation result. Experimental results show that compared with TF-IDF, ItemKNN and other models, the proposed model significantly improves the recall rate and the Mean Average Precision (MAP).
  • ZHOU Weixiang, ZHANG Wen, YANG Bo, LIU Yi, ZHANG Lin, ZHANG Yangsen
    Computer Engineering. 2020, 46(10): 60-66,73. https://doi.org/10.19678/j.issn.1000-3428.0055979
    Personalized microblog recommendation is crucial for improving user experience and helping users obtain information accurately and in time. Based on an analysis of the behavior patterns of microblog users, this paper proposes a personalized microblog recommendation model based on scenario modeling and Convolutional Neural Network (CNN). Scenario modeling is performed for users along the dimensions of time and region, so as to extract the user's temporal scenario pattern and geographical scenario pattern. A calculation method for scenario pattern similarity is then provided to extend the scenario patterns of users, capturing the scenario pattern tendencies that users are interested in. On this basis, a personalized scenario pattern library of the user is established, and a CNN is used to construct a personalized recommendation model for microblog users. Experimental results on real microblog data show that compared with the ILCAUSR and RA-CD algorithms, the proposed model has better recommendation performance, and it achieves the best results in the Mean Absolute Error (MAE) and Average User Satisfaction (AUS) indexes compared with the temporal scenario model and the geographical scenario model.
  • ZHANG Pan, LU Guangyue, Lü Shaoqing, ZHAO Xueli
    Computer Engineering. 2020, 46(10): 67-73. https://doi.org/10.19678/j.issn.1000-3428.0055764
    To combine the information of network topological structure and node attributes and thereby improve the quality of network representation learning, this paper proposes a new attributed network representation learning algorithm named ANEMF. The algorithm introduces the idea of cosine similarity to define the second-order structural similarity matrix and the attribute similarity matrix of the network. Through cooperative optimization of the network structure similarity and attribute similarity functions, the information of network topological structure and node attributes is fused in the form of matrix factorization. Finally, the node representation vectors are obtained through multiplicative update rules. Experimental results on three public datasets show that compared with the DeepWalk and TADW algorithms, the proposed algorithm preserves both network topological structure and node attribute information in the obtained node representation vectors, and significantly improves the overall performance in node classification tasks.
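A minimal sketch of the matrix-factorization fusion idea is given below. The abstract does not specify ANEMF's objective or its exact update rules; the joint non-negative factorization with a shared representation U, the weight alpha, and the use of A·A as a stand-in for second-order structure are all illustrative assumptions.

```python
import numpy as np

def cosine_similarity_matrix(X):
    """Row-wise cosine similarity of a matrix X (n x d) -> (n x n)."""
    norms = np.linalg.norm(X, axis=1, keepdims=True) + 1e-12
    Xn = X / norms
    return Xn @ Xn.T

def joint_nmf(S_struct, S_attr, dim=32, alpha=0.5, iters=200, seed=0):
    """Factorize both similarity matrices with a shared node representation U."""
    rng = np.random.default_rng(seed)
    n = S_struct.shape[0]
    U = rng.random((n, dim))
    H1 = rng.random((dim, n))
    H2 = rng.random((dim, n))
    for _ in range(iters):
        # multiplicative updates for ||S1 - U H1||^2 + alpha * ||S2 - U H2||^2
        denom = U @ (H1 @ H1.T + alpha * H2 @ H2.T) + 1e-12
        U *= (S_struct @ H1.T + alpha * S_attr @ H2.T) / denom
        H1 *= (U.T @ S_struct) / (U.T @ U @ H1 + 1e-12)
        H2 *= (U.T @ S_attr) / (U.T @ U @ H2 + 1e-12)
    return U  # rows are node representation vectors

# Assumed usage: A is the adjacency matrix, X the node attribute matrix.
# U = joint_nmf(cosine_similarity_matrix(A @ A), cosine_similarity_matrix(X))
```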
  • Lü Guoying, WU Yujuan, LI Ru, ZHANG Yueping, GUAN Yong, GUO Shaoru
    Computer Engineering. 2020, 46(10): 74-80,87. https://doi.org/10.19678/j.issn.1000-3428.0055582
    Reasoning based on frame semantics is an effective means of achieving semantic understanding in tasks such as discourse comprehension and question answering. It finds the inferential path by constructing connections between the frames of sentences in Chinese texts, but coreference among the internal representations of frame elements hinders the establishment of connections between frames. To address the problem, this paper proposes a coreference resolution method based on frame features, which integrates Chinese frame semantic information and uses different classification algorithms to achieve coreference resolution. Experimental results on a corpus of frame semantics discourses show that applying Chinese frame features to classifiers improves the results of coreference resolution, and the classification performance of the support vector machine is better than that of naive Bayes, decision tree and other classifiers.
  • BI Meng, SHAO Zhong, XU Jian
    Computer Engineering. 2020, 46(10): 81-87. https://doi.org/10.19678/j.issn.1000-3428.0058973
    Existing user behavior clustering methods require the size of the user behavior data to be determined in advance, and the generated cluster labels lack explicit semantics. To solve these problems, this paper proposes an automatic cluster label generation method for clustering analysis of network user behavior. The method applies the Latent Factor Model (LFM) and matrix decomposition to the raw network user behavior data to handle missing values. Based on the attribute features of the user behavior data, user behavior clustering is performed and behavior features are added during clustering. At the same time, cluster labels are generated based on the behavior feature information to improve the accuracy of user behavior clustering. Experimental results on the Last.fm, Movielens and CiteULike datasets show that the proposed method does not require the size of the user behavior data to be determined in advance, and can automatically generate cluster labels with more explicit semantics while keeping a high clustering accuracy.
  • WANG Yan, WANG Congying, SHEN Yanmei
    Computer Engineering. 2020, 46(10): 88-94,102. https://doi.org/10.19678/j.issn.1000-3428.0055939
    The collaborative filtering algorithm is widely used in the field of recommendation because of its good recommendation effect, which is nonetheless significantly reduced when the data is sparse or in cold-start scenarios. In order to make full use of the user's historical information to improve the recommendation precision in these cases, this paper proposes an improved clustering joint similarity recommendation algorithm. The selection of the center points of K-means++ clustering is improved by using the bee colony algorithm, so that the cluster centers are optimal over the whole data, and the clustering results are integrated to further optimize the clustering. According to the clustering results, an improved user similarity algorithm is used within each cluster to optimize the traditional similarity algorithm, so that the similarity between users is optimal. The optimal results are then recommended to users according to the neighborhood-based score prediction method. Experimental results show that the proposed algorithm outperforms other existing algorithms in terms of precision, recall rate and Mean Absolute Error (MAE), and its performance remains the best in the case of sparse data.
  • ZHANG Jinfeng, SHI Chaoxia, WANG Yanqing
    Computer Engineering. 2020, 46(10): 95-102. https://doi.org/10.19678/j.issn.1000-3428.0056013
    As a research hotspot in the field of robotics, Simultaneous Localization and Mapping (SLAM) has made great progress in recent years, but few SLAM methods take dynamic or movable targets in the application scene into account. To handle the problem, this paper proposes a SLAM method that introduces a deep learning-based object detection algorithm into the classic ORB_SLAM2 method to make it more suitable for dynamic scenes. Feature points are divided into dynamic feature points and potential dynamic feature points. A motion model is calculated based on the dynamic feature points and used to select the static feature points in the scene for pose tracking, and to select static feature points among the potential dynamic feature points for map construction. Experimental results on the KITTI and TUM datasets show that compared with the ORB_SLAM2 system, the proposed method improves the tracking accuracy and the application performance of the map.
  • MENG Lei, YE Zhonglin, ZHAO Haixing, YANG Yanlin
    Computer Engineering. 2020, 46(10): 103-111. https://doi.org/10.19678/j.issn.1000-3428.0055984
    In complex network modeling, the hypernetwork model is generally built on the preferential connection mechanism, which is the most commonly used node connection mechanism. At present, research on the hypernetwork model mainly focuses on its growth and evolution, and pays less attention to its preferential connection modes. This paper studies the preferential connections in the evolution of the hypernetwork model, and realizes preferential connection based on the roulette-wheel method and the linked-list method to construct a hypernetwork evolution model. The characteristics of the constructed uniform hypernetwork and random hypernetwork are analyzed, and the variation laws of the slope of the power-law hyper-degree distribution are studied by adjusting the number of selected old nodes, the number of added new nodes, and the network scale during construction of the hypernetwork. Experimental results show that the roulette-wheel method takes much longer to construct a hypernetwork model than the linked-list method does.
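The roulette-wheel step mentioned above can be sketched as follows; the growth rule (one new node joining a hyperedge with k preferentially chosen old nodes) and the hyper-degree bookkeeping are simplifying assumptions, not the paper's exact evolution model.

```python
# Illustrative sketch of roulette-wheel preferential selection by hyper-degree.
import random

def roulette_select(hyper_degrees, k):
    """Pick k distinct old nodes with probability proportional to hyper-degree."""
    chosen = set()
    while len(chosen) < k:
        total = sum(d for i, d in enumerate(hyper_degrees) if i not in chosen)
        r = random.uniform(0, total)
        acc = 0.0
        for i, d in enumerate(hyper_degrees):
            if i in chosen:
                continue
            acc += d
            if acc >= r:
                chosen.add(i)
                break
    return list(chosen)

# Example: grow a hypernetwork by repeatedly attaching one new node to k old nodes.
hyper_degrees = [1, 1, 1]               # three seed nodes
for _ in range(97):                     # add 97 new nodes
    old = roulette_select(hyper_degrees, k=2)
    for i in old:
        hyper_degrees[i] += 1           # each old node joins one more hyperedge
    hyper_degrees.append(1)             # the new node joins its first hyperedge
```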
  • Cyberspace Security
  • ZHOU Neng, ZHANG Minqing, LIN Wenbing
    Computer Engineering. 2020, 46(10): 112-119. https://doi.org/10.19678/j.issn.1000-3428.0056281
    In order to improve the embedding capacity of reversible information hiding in the encrypted domain, this paper proposes a separable reversible information hiding algorithm in the encrypted domain based on secret sharing. Firstly, the original image is segmented according to bit-planes. Then, data is embedded into the low bit-planes of the encrypted data by using the Difference Expansion (DE) algorithm, and embedded into the high bit-planes by using homomorphic addition. The receiver decrypts the low bit-planes and high bit-planes respectively to obtain a decrypted image similar to the original image. The receiver can also extract data directly from the low bit-planes of the encrypted data, and extract data from the high bit-planes after they are decrypted, so as to realize reversible recovery of the original image. Simulation results show that the proposed algorithm has a higher Peak Signal-to-Noise Ratio (PSNR) than existing separable algorithms, and its average embedding rate reaches 0.3 BPP.
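As a reference for the low bit-plane step, below is a minimal sketch of classic Difference Expansion (DE) on a pixel pair; the paper's secret-sharing, encryption and bit-plane handling around this primitive are not reproduced.

```python
def de_embed(x, y, bit):
    """Embed one bit into the pixel pair (x, y); returns the marked pair."""
    l = (x + y) // 2          # average (kept unchanged)
    h = x - y                 # difference
    h_marked = 2 * h + bit    # expand the difference and append the bit
    return l + (h_marked + 1) // 2, l - h_marked // 2

def de_extract(x_m, y_m):
    """Recover the bit and the original pixel pair."""
    l = (x_m + y_m) // 2
    h_marked = x_m - y_m
    bit = h_marked & 1
    h = h_marked // 2
    return bit, (l + (h + 1) // 2, l - h // 2)

print(de_extract(*de_embed(206, 201, 1)))   # -> (1, (206, 201))
```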
  • DUAN Jing, Lü Xin, LIU Fan
    Computer Engineering. 2020, 46(10): 120-130,136. https://doi.org/10.19678/j.issn.1000-3428.0057993
    Consortium blockchain is currently the preferred blockchain solution for governments and enterprises building industry applications, but its core consensus mechanism, Practical Byzantine Fault Tolerance (PBFT), has scalability problems. Using sharding technology and agent nodes can effectively reduce the complexity of consensus messages, and relevant research focuses on the agent election mode and on improving and intervening in the protocol process. Therefore, this paper proposes a hierarchical consensus optimization mechanism based on trust delegation, TDH-PBFT. The mechanism divides the consensus nodes into independent groups, and the behaviors of nodes within each group during the consensus process are evaluated to obtain the trust degree of each node. Based on the trust degree, the entrusted agent is elected to participate in the local and global consensus. The completeness of TDH-PBFT is proved theoretically. Experimental results show that when the number of nodes increases, the proposed mechanism can effectively reduce consensus time, increase system throughput, and ensure consensus service quality.
  • GE Binghui, ZHAO Zongqu, HE Zheng, QIN Panke
    Computer Engineering. 2020, 46(10): 131-136. https://doi.org/10.19678/j.issn.1000-3428.0056114
    To address the problem that signatures and keys are too long in traditional ring signature schemes on lattices, this paper proposes an improved ring signature scheme on lattices based on Programmable Hash Functions (PHF). The MP12 trapdoor function is used to generate the signature key. The PHF is used to simulate part of the programmable properties of the random oracle. The partition proof method on lattices is used in the construction of the ring signature scheme to obtain the verification key and the signature. Analysis results show that compared with other lattice-based ring signature schemes using random matrices and the G matrix, the proposed scheme reduces the length of the signature, the verification key and the signature key, and satisfies Existential Unforgeability under adaptive Chosen-Message Attacks (EUF-CMA) in the standard model.
  • NIU Shufen, YANG Pingping, XIE Yaya, WANG Caifen, DU Xiaoni
    Computer Engineering. 2020, 46(10): 137-142,150. https://doi.org/10.19678/j.issn.1000-3428.0055654
    Existing identity-based keyword search encryption schemes with a designated server cannot satisfy the indistinguishability of keyword ciphertexts. To meet the higher security requirements of email systems, this paper proposes an identity authentication-based keyword search encryption scheme for a designated mail server. The scheme resists off-line keyword guessing attacks by encrypting the identities of the designated mail storage server and the data receiver. In the random oracle model, the security properties of the scheme, including the indistinguishability of keyword ciphertexts under adaptively chosen message attacks, the indistinguishability of trapdoors, and resistance to off-line keyword guessing attacks, are proved. Results of theoretical analysis and numerical experiments show that the proposed scheme achieves higher computational efficiency in keyword encryption and verification than the dIBEKS scheme.
  • ZHANG Jun, ZHANG Ankang, WANG Hui
    Computer Engineering. 2020, 46(10): 143-150. https://doi.org/10.19678/j.issn.1000-3428.0056289
    In order to reduce network security risks and better optimize network attack paths, this paper constructs a sequential network attack graph model (SQAG) based on existing network attack graphs. The model discretizes the attack process, and the attack graph at each moment contains the nodes occupied by the attacker at that time. An attack entropy optimization algorithm is used to perform cost-benefit analysis of the sub-attack paths, so as to reasonably eliminate redundant paths. Through reasonable deduction of the attack process, the junction tree algorithm, which performs exact inference, is applied to the sequential network attack graph to obtain the confidence degree of each node of the attack graph at any moment in real time. Experimental results show that when the firewall tightens the access scale, the confidence degree of each node in the proposed model decreases with time during the attack process, and eliminating redundant paths with the attack entropy optimization algorithm yields more accurate node confidence degrees.
  • ZHAO Chao, PAN Zulie, FAN Jing
    Computer Engineering. 2020, 46(10): 151-158. https://doi.org/10.19678/j.issn.1000-3428.0055750
    Existing automatic detection systems for software vulnerabilities fail to automatically detect programs with heap overflow vulnerabilities. To address the problem, this paper proposes an automatic detection method for heap overflow fastbin attacks on Linux platforms. Based on fastbin attack examples, the characteristics of fastbin attacks are used to establish a detection model for fastbin attacks, and on this basis a detection method for fastbin attacks is proposed. The method uses taint analysis and symbolic execution to monitor the key information of symbolic data reaching the vulnerability trigger point, and on this basis constructs the path constraints and data constraints that trigger fastbin attacks. By solving these constraints, the possibility of fastbin attacks in the program can be judged and test cases can be generated. Experimental results show that the proposed heap overflow fastbin attack detection method can effectively detect fastbin attacks.
  • Mobile Internet and Communication Technology
  • BAO Xiang, LEI Lei, SHEN Gaoqing, LI Zhilin
    Computer Engineering. 2020, 46(10): 159-165. https://doi.org/10.19678/j.issn.1000-3428.0055335
    The orthogonal circular orbit constellation combines the advantages of the polar orbit constellation and the equatorial orbit constellation to achieve continuous global coverage, improving the coverage performance of a pure polar orbit constellation at low and middle latitudes to some extent. However, the traditional design method for orthogonal circular orbit constellations divides the coverage of the polar orbit constellation and the equatorial orbit constellation by latitude lines, which is too rough to fully utilize the geometric properties of the two constellations. To solve this problem, according to the motion and coverage features of the polar orbit constellation satellites, this paper analyzes the features of the coverage gaps generated in areas that do not meet the continuous coverage requirements, and integrates the characteristics of the equatorial orbit constellation satellites to propose a design method for orthogonal circular orbit constellations based on geometric analysis. Using the analytical method, the minimum ground-coverage half-width angle of the equatorial satellites is determined. A typical constellation scheme is given for comparison with the traditional design method of orthogonal circular orbit constellations, and the effectiveness of the proposed method is demonstrated by STK simulations.
  • WANG Shaobo, GUO Ying, SUI Ping, LI Hongguang, YANG Xin
    Computer Engineering. 2020, 46(10): 166-172,181. https://doi.org/10.19678/j.issn.1000-3428.0055293
    To implement blind source separation of frequency-hopping signals in synchronous networking under underdetermined conditions, this paper proposes a frequency-hopping signal separation method based on the parallel factor analysis model and the subspace projection method. The method calculates the time-delay correlation matrices of the frequency-hopping signals to construct a third-order tensor, so that the mixing matrix estimation problem is transformed into a tensor Canonical Polyadic (CP) decomposition problem. Meanwhile, the classic Alternating Least Squares (ALS) algorithm for CP decomposition is improved: the direct trilinear decomposition method is used to roughly estimate the loading matrices, which then serve as the initial matrices for the ALS iterations, and during the iterations a standard line search is used to accelerate convergence and estimate the mixing matrix. On this basis, the subspace projection method is used to complete the blind source separation of the frequency-hopping signals, and the separation result is optimized by removing discrete noise. Simulation results show that this method can effectively improve the estimation accuracy of the mixing matrix and the recovery of the source signals.
  • LI Chao, LI Bo, DING Hongwei, YANG Zhijun, LIU Qianlin
    Computer Engineering. 2020, 46(10): 173-181. https://doi.org/10.19678/j.issn.1000-3428.0055869
    In tactical data link systems, the prioritization-based polling access protocol can ensure that information packets at prioritized sites are sent in time, but service switching still consumes time. In order to reduce the waiting time and queue length, this paper proposes a Prioritization-based Continuous service Access Control Protocol (PCACP) implemented with a Field Programmable Gate Array (FPGA). The protocol adopts exhaustive service for the prioritized sites and limited service for the subordinate sites, and the control center adopts a continuous service mode between sites. The Markov chain and the probability generating function are used to analyze the performance indexes of the model and obtain their accurate solutions. The model is simulated in MATLAB, and the results show that the proposed protocol can ensure that the information packets of the prioritized sites are sent in time, reduce the queue length of information packets and increase the throughput of the system.
  • ZHU Guohui, LIU Xiuxia, ZHANG Yin
    Computer Engineering. 2020, 46(10): 182-187,192. https://doi.org/10.19678/j.issn.1000-3428.0055914
    To address the Virtual Network Embedding (VNE) problem in the case of multiple link failures in physical networks, this paper proposes a Survivable Virtual Network Embedding (SVNE) algorithm. The algorithm provides backup resources for physical links and uses a multi-path selection algorithm to create a backup route set. An integer linear program is solved according to the objective function, and the path with the largest bandwidth resource balance is selected from the backup route set of the failed link to re-embed the virtual links affected by the link failure. Simulation results show that the proposed algorithm can effectively shorten the failure recovery delay, and improve the long-term average revenue-to-expense ratio and the average failure recovery rate.
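The re-embedding choice described above can be sketched as follows; treating the "bandwidth resource balance" of a path as its bottleneck residual bandwidth after embedding is an assumption made for illustration.

```python
# Illustrative sketch: choose a backup path for a failed virtual link.
# residual_bw maps each physical link to its remaining bandwidth; demand is the
# bandwidth required by the virtual link being re-embedded.
def pick_backup_path(backup_paths, residual_bw, demand):
    feasible = [p for p in backup_paths
                if all(residual_bw[e] >= demand for e in p)]
    if not feasible:
        return None
    # assumed balance metric: bottleneck residual bandwidth after embedding
    return max(feasible, key=lambda p: min(residual_bw[e] - demand for e in p))

# Example with two candidate paths over labelled physical links.
bw = {"a-b": 40, "b-c": 25, "a-d": 30, "d-c": 35}
print(pick_backup_path([["a-b", "b-c"], ["a-d", "d-c"]], bw, demand=10))
```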
  • Lü Yaping, JIA Xiangdong, LU Yi, YE Peiwen
    Computer Engineering. 2020, 46(10): 188-192. https://doi.org/10.19678/j.issn.1000-3428.0055640
    In order to improve the service quality of indoor wireless communication to meet user demand, downlink power allocation for home base stations based on the Deep Q-Learning (DQL) algorithm is carried out to maximize the system throughput. In a system model where home base stations are densely deployed in office areas, the physical locations of the home base stations are modeled as a Poisson point process, and mobile users are randomly distributed around each location. On this basis, a deep neural network with two hidden layers is constructed to capture the nonlinearity of the network and improve its fitting ability. Simulation results show that the DQL algorithm can effectively improve the system throughput and convergence speed compared with the greedy algorithm and the Q-learning algorithm.
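The Poisson point process deployment mentioned above can be sketched as below; the intensity and the rectangular office area are made-up illustrative parameters.

```python
import numpy as np

def poisson_point_process(intensity, width, height, seed=0):
    """Sample base-station coordinates from a homogeneous PPP with the given intensity."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(intensity * width * height)   # number of stations ~ Poisson(lambda * area)
    return np.column_stack([rng.uniform(0, width, n), rng.uniform(0, height, n)])

stations = poisson_point_process(intensity=0.01, width=50, height=20)  # ~10 stations expected
```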
  • Computer Architecture and Software Technology
  • SU Haoxiang, DONG Zhenghong, YANG Fan, LIU Lihao
    Computer Engineering. 2020, 46(10): 193-200. https://doi.org/10.19678/j.issn.1000-3428.0058400
    Existing simulation analysis and visualization programs for satellite loads suffer from poor portability, difficulty in sharing simulation results, and high hardware requirements for real-time calculation and rendering. To address these problems, this paper proposes a Cesium-based visual simulation analysis platform for satellite loads. The platform adopts the B/S architecture and WebGL technology to separate data calculation from real-time rendering, greatly reducing the hardware requirements for simulation. The information of simulation scenes is stored locally and in the cloud through JSON files, which ensures that simulation results are shared synchronously and enables visual display of simulation scenes through a browser on any terminal. Meanwhile, the four-dimensional transformation matrix is used for load coverage calculation, and its inverse is used for transit analysis, which greatly simplifies the calculation process of load simulation analysis. Simulation results show that the proposed platform can quickly generate accurate and realistic visual results of satellite loads, and the errors between its transit analysis results and the STK results are within milliseconds.
  • LI Kang, ZHANG Lufei, ZHANG Xinwei, YU Gongjian, LIU Jiahang, WU Dong, CHAI Zhilei
    Computer Engineering. 2020, 46(10): 201-209. https://doi.org/10.19678/j.issn.1000-3428.0056430
    To address the low running speed and high power consumption of the NEST Spiking Neural Network (SNN) simulator in brain-like computing systems, this paper proposes a NEST simulator for SNN based on a Field Programmable Gate Array (FPGA) cluster. By improving the structure of the NEST simulator, a pipelined parallel architecture for the Leaky Integrate and Fire (LIF) neuron calculation module is proposed, which realizes a dual-core, dual-thread and multi-node, multi-process FPGA cluster design. Experimental results on a cortical visual simulation model show that the computational energy efficiency of the proposed FPGA-cluster-based NEST simulator is 43.93 times that of the Xeon E5-2620 and 23.54 times that of the ARM A9, and its computational speed is 12.36 times that of the Xeon E5-2620 and 208 times that of the ARM A9.
  • WANG Yuxin, GAO Meifeng
    Computer Engineering. 2020, 46(10): 210-215. https://doi.org/10.19678/j.issn.1000-3428.0056426
    In order to reduce the high memory consumption of the bsdiff algorithm when building new firmware versions during the firmware update of embedded devices, this paper proposes a memory-saving incremental update algorithm. An improved patch file format for the bsdiff algorithm is used to avoid frequently recording and calculating address offsets when applying patch files. The parallel decompression process in the bsdiff algorithm is replaced by serial decompression, and the required auxiliary space is reduced by processing data in batches. At the same time, an asymmetric lossless compression algorithm is applied to the compression and decompression process of the improved incremental update algorithm, which reduces the memory consumption caused by decompressing patch files. Experimental results show that, compared with the bsdiff, xdelta, vcdiff and zdelta algorithms, the proposed algorithm can effectively reduce memory consumption when building new firmware versions, and has good compression performance.
  • WANG Shuyan, ZHANG Yiquan, SUN Jiaze
    Computer Engineering. 2020, 46(10): 216-222,230. https://doi.org/10.19678/j.issn.1000-3428.0055862
    Bad smells in code seriously affect the quality of software and its maintenance. To address the low accuracy of machine learning algorithms in bad smell detection and the limited variety of bad smell datasets, this paper proposes a detection method for bad smells in code based on BP Neural Network (BPNN). Considering that different types of bad smells appear in actual software development, four types of bad smells, Data Class, God Class, Long Method and Feature Envy, are studied and merged into method-level and class-level code smell datasets. Based on the label information in the datasets, supervised deep learning is used to build a true/false positive prediction model for bad smells. Experimental results show that compared with bad smell detection methods based on machine learning and metrics, the proposed method improves the average accuracy by 15.19% and the average F1 value by 58.39%.
  • LIU Jiamei, XU Qiaozhi
    Computer Engineering. 2020, 46(10): 223-230. https://doi.org/10.19678/j.issn.1000-3428.0056436
    To address uneven loads in the control plane of the Software Defined Network (SDN) architecture caused by the complexity and variability of network traffic, this paper proposes PPME, a network traffic prediction and controller pre-deployment model based on maximum entropy with hidden Markov optimization. The model classifies SDN traffic according to protocol types, and uses the maximum entropy algorithm to predict the distribution of future data streams based on the captured historical data streams, so as to generate a pre-deployment scheme for the various controllers in the control plane. The timeliness of the prediction scheme is optimized by the introduced hidden Markov chain. Experimental results show that compared with the SVR and GBRT models, the proposed model has higher prediction accuracy, and its generated pre-deployment scheme can adapt to dynamic changes in complex SDN environments. It reduces the load imbalance and controller migration caused by emergencies, and thus reduces the network delay and response time caused by controller migration.
  • ZHAO Yu, WU Chengrong, YAN Ming
    Computer Engineering. 2020, 46(10): 231-239,247. https://doi.org/10.19678/j.issn.1000-3428.0055763
    Mainstream methods for background traffic generation include introducing actual Internet traffic and generating traffic with network test equipment, but these methods pay little attention to the temporal and spatial distribution of traffic and the detailed distribution of business traffic, so they fail to simulate traffic in depth. To this end, this paper proposes a business background traffic generation system based on LoadRunner for Web business systems. The system adopts a scalable scripting mechanism, a container-based distributed traffic generator array deployment method, an address modification mechanism, spatial distribution control, and an inspection and feedback mechanism for ON/OFF model traffic, so as to generate large-scale realistic background traffic. At the same time, the proposed system provides content heterogeneity, on-demand expansion, temporal and spatial probability distribution control, adaptive adjustment and other features. Verification tests are carried out by taking a forum website as an example business system under test, and the results show that the traffic generated in the example tests basically meets the temporal and spatial characteristics of the background traffic and accords with the pre-settings in other characteristics, which demonstrates the feasibility of the system design.
  • LI Wei, LIANG Jun, ZHANG Zhen, LI Qing
    Computer Engineering. 2020, 46(10): 240-247. https://doi.org/10.19678/j.issn.1000-3428.0056306
    With the rapid development of drone technology, airborne Synthetic Aperture Radar (SAR) has become the main remote sensing solution for cloudy and hilly areas due to its high resolution, high maneuverability and low cost. However, the computing resources of airborne SAR are limited and its analysis process is time-consuming, which reduces the responsiveness of drones to the external environment. Therefore, this paper describes the implementation of an OpenCL-based parallel optimization strategy on the ARM Mali-T860 GPU architecture for the multi-look processing, rotation scaling and image quantization algorithms in airborne SAR imaging. The optimization strategy is designed to simplify calculations, optimize memory access and reduce conditional branches. Experimental results show that compared with the CPU-based SAR imaging algorithms, the optimized multi-look processing, rotation scaling and image quantization algorithms are 17-62 times, 48-74 times, and 31-33 times faster respectively, and the optimized algorithms can be used in cross-platform applications.
  • Graphics and Image Processing
  • MA Longxiang, YANG Hao, SONG Tingting, ZHAI Pengbo, YU Kang
    Computer Engineering. 2020, 46(10): 248-252. https://doi.org/10.19678/j.issn.1000-3428.0056685
    Accurate segmentation of tongue images is of great significance for tongue recognition and classification in tongue diagnosis. Traditional image processing and deep learning methods lose part of the edge information of tongue images, thus reducing the accuracy of tongue recognition. To solve the problem, this paper proposes a tongue image segmentation algorithm based on a high-resolution network. A region location network is used to identify the tongue: it extracts features from the original tongue image to generate proposal boxes, which are classified and regressed to locate the tongue region. At the same time, a high-resolution network is constructed to extract high-resolution features of the region and complete the tongue image segmentation. Experimental results show that the proposed algorithm can effectively preserve the edge information of tongue images, and the mean Intersection over Union (mIoU) of its segmentation results is 98.2%, which is more accurate than that of the SegNet and Mask-RCNN algorithms.
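For reference, the mIoU figure quoted above is the standard mean Intersection over Union; a minimal sketch of its computation for a binary tongue/background mask is given below.

```python
import numpy as np

def mean_iou(pred, target, num_classes=2):
    """pred, target: integer label arrays of the same shape."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, target))   # (1/2 + 2/3) / 2 ≈ 0.583
```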
  • WANG Hongru, ZHANG Gong, LU Daohua, WANG Jia
    Computer Engineering. 2020, 46(10): 253-258. https://doi.org/10.19678/j.issn.1000-3428.0055876
    To solve the problems of image degradation and color attenuation in underwater imaging, this paper proposes an image enhancement algorithm based on global background light estimation and color correction. The similarity between foggy images and underwater images is used to adapt an algorithm for fog removal in air. When estimating the global background light of the image, a rectangular template is used to calculate the color saturation variance in image blocks, and the region with the minimum variance is selected as the background light estimation region. To address the problem that the original background light estimation algorithm makes the image whiter than it should be, minimum filtering is applied. The Retinex algorithm is also used to correct the color of the R channel of the image, and the other channel images are obtained by combining the color attenuation coefficient ratios of the color channels. Experimental results show that this algorithm can effectively remove the turbidity of underwater images, correct the color deviation, and significantly improve the clarity of images.
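A minimal sketch of the background-light block selection described above follows; the block size, the HSV-style saturation formula, and the non-overlapping scan are illustrative assumptions rather than the paper's exact template.

```python
import numpy as np

def estimate_background_block(img_rgb, block=15):
    """Return the (row, col) of the block with minimum saturation variance."""
    maxc = img_rgb.max(axis=2).astype(np.float64)
    minc = img_rgb.min(axis=2).astype(np.float64)
    sat = np.where(maxc > 0, (maxc - minc) / (maxc + 1e-6), 0.0)  # HSV-style saturation
    best, best_pos = np.inf, (0, 0)
    h, w = sat.shape
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            v = sat[r:r + block, c:c + block].var()
            if v < best:
                best, best_pos = v, (r, c)
    return best_pos

# The mean color of the selected block then serves as the global background light estimate.
pos = estimate_background_block(np.random.randint(0, 256, (60, 80, 3)))
```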
  • TAO Qian, XIONG Fengguang, LIU Tao, KUANG Liqun, HAN Xie, LIANG Zhenbin, CHANG Min
    Computer Engineering. 2020, 46(10): 259-265,274. https://doi.org/10.19678/j.issn.1000-3428.0055896
    In order to effectively register the data from a laser scanner and a digital camera, this paper proposes an automatic registration method for multiple point cloud data and texture sequences based on central projection. The method registers pre-processed local point clouds to form complete point cloud data, which is then used to generate intensity images by central projection. The matching relationships between the texture images and the intensity images are obtained by feature matching and optimized by the RANSAC algorithm, so as to determine the transformation relationship between each texture image and the corresponding intensity image. On this basis, the texture image sequence is pre-processed by fusion, and the registration between the point cloud data and the texture images is implemented by using the collinearity equations to form the final point cloud data with RGB colors. Experimental results show that the proposed method can reduce the difference between the two kinds of heterogeneous data, and achieves better registration performance with improved execution efficiency.
  • CAO Weidong, XIE Cui, HAN Bing, DONG Junyu
    Computer Engineering. 2020, 46(10): 266-274. https://doi.org/10.19678/j.issn.1000-3428.0055985
    Traditional oceanic front identification depends on a gradient threshold: sea areas with a gradient value greater than the set threshold are regarded as ocean fronts. However, the threshold is set manually according to inconsistent standards, and complex ocean fronts cannot be accurately identified with a single threshold. To address these problems, this paper proposes a deep learning-based adaptive gradient threshold recognition method for ocean fronts. It annotates sea temperature gradient maps and, through Mask R-CNN training, obtains a model that can identify ocean fronts at the pixel level. The gradient value distribution specific to each type of front is counted as the benchmark gradient threshold of that front, and based on this threshold the pixel-level front recognition results are finely adjusted. The accuracy of the front recognition results is quantified to improve the reliability of the adaptive adjustment process. Experimental results show that compared with the traditional gradient threshold method and pure deep learning methods, this method can automatically realize fine ocean front recognition, and has good independence and integrity.
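For contrast with the adaptive method, the traditional fixed-threshold baseline mentioned above can be sketched as follows; the threshold value here is arbitrary, which is precisely the limitation the paper addresses.

```python
import numpy as np

def gradient_front_mask(sst, threshold):
    """sst: 2-D sea-surface-temperature grid; returns a boolean front mask."""
    gy, gx = np.gradient(sst.astype(np.float64))
    grad_mag = np.hypot(gx, gy)         # gradient magnitude per grid cell
    return grad_mag > threshold

sst = np.tile(np.linspace(18.0, 24.0, 64), (64, 1))   # synthetic warm-to-cold field
mask = gradient_front_mask(sst, threshold=0.05)        # arbitrary illustrative threshold
```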
  • ZHANG Di, LU Jianfeng
    Computer Engineering. 2020, 46(10): 275-281,288. https://doi.org/10.19678/j.issn.1000-3428.0056292
    To improve the segmentation performance of semantic segmentation networks for monocular images in regions where the image depth varies, this paper proposes a semantic segmentation model that complementarily fuses the depth information of binocular images with cross-level features. Without changing its structure, an existing monocular network is used as a twin (siamese) network to extract two-dimensional information from the input left and right binocular images, and a color-depth fusion module is designed based on ParallelNet. On this basis, the similarity of binocular image feature points at different disparity levels is calculated to extract depth information, which is fused with the two-dimensional information to obtain depth features. At the same time, a cross-level feature attention module is used to obtain accurate low-level category boundary information under the guidance of high-level semantic information, so as to improve the utilization of features at each scale and the accuracy of edge regions. Experimental results show that compared with the traditional ParallelNet binocular baseline model, the proposed model increases the mean Intersection over Union (mIoU) and the Pixel Accuracy (PA) by 3.67 and 3.32 percentage points respectively, and its segmentation of regions such as fences and traffic signs is more detailed and accurate.
  • GAO Hongwei, HAN Xiaohong, ZHOU Daoxiang
    Computer Engineering. 2020, 46(10): 282-288. https://doi.org/10.19678/j.issn.1000-3428.0055927
    Existing supernova detection methods suffer from poor image contrast and difficult feature extraction caused by complex image backgrounds, small object sizes, and the imbalance between positive and negative samples. To address these problems, this paper proposes a supernova detection method that improves the Faster R-CNN algorithm from the perspectives of data integration and feature extraction network optimization. The method synthesizes each group of images to improve their contrast. To reduce the difficulty of feature extraction, a deep residual network is used to extract features from the synthesized images, and the top-layer features are fused with lower-layer features to construct a feature pyramid network, so that each layer of the network has strong semantic information. At the same time, the Online Hard Example Mining (OHEM) method is used to train on high-loss samples to deal with the imbalance between positive and negative samples, which significantly improves the detection performance of the algorithm. Experimental results show that compared with the original Faster R-CNN algorithm, the proposed algorithm improves the Score by 8.51% and the F1 score by 45.52%, and has better detection performance and generalization ability.
  • Development Research and Engineering Application
  • WANG Liang, WANG Min, WANG Xiaopeng, LUO Wei, FENG Yu
    Computer Engineering. 2020, 46(10): 289-293,300. https://doi.org/10.19678/j.issn.1000-3428.0055694
    Network queuing delay is of great significance for understanding network bandwidth utilization and analyzing congestion levels. However, traditional delay measurement technology has poor timeliness and accuracy in predicting network traffic and round-trip delay, and it tends to miss sudden network delay changes. Considering the fine-grained characteristics and variability of queuing delay inside a switch, this paper proposes a multi-time-scale fusion prediction method based on the LSTM model. In-band network telemetry is used to obtain and transform fine-grained network parameters to provide delay and utilization features for the prediction model. A multi-time-scale fusion prediction model (LSTM-Merge) based on the Long Short-Term Memory (LSTM) network is constructed to fuse data of different sampling scales, and a stream computing framework is used to predict the network queuing delay. Experimental results show that the Root Mean Square Error (RMSE) of the prediction results of the LSTM-Merge model is smaller than that of the LSTM, SVR and other models, and the real-time performance and accuracy of the three-time-scale fusion model are better than those of the other scale combinations.
  • LI Zijian, CHI Chengying, ZHAN Xuegang
    Computer Engineering. 2020, 46(10): 294-300. https://doi.org/10.19678/j.issn.1000-3428.0055669
    In the field of Natural Language Processing, sentence and clause segmentation of Southeast Asian languages such as Thai is a challenging task. Therefore, the sequence tagging model is applied to sentence segmentation, and a sentence boundary recognition model based on a bidirectional Long Short-Term Memory (LSTM) recurrent neural network is proposed. The words or characters in Thai sentences are transformed into vectors of different dimensions by using the GloVe word vector technique, and then the word vectors or character vectors are combined into a sentence vector and input into the model for training. On this basis, context information is captured through the bidirectional network structure to achieve better sentence segmentation. Experimental results show that the model achieves high accuracy in the Thai sentence segmentation task.
  • ZHU Mingjian, FAN Yuan, ZHANG Chengxiao
    Computer Engineering. 2020, 46(10): 301-307,314. https://doi.org/10.19678/j.issn.1000-3428.0055040
    This paper proposes a dynamic Event Triggering Mechanism (ETM) based on state observer output feedback for linear systems whose internal state cannot be measured directly. The method uses a state observer to estimate the internal state and designs an event triggering mechanism on this basis. Lyapunov theory is then used to obtain two Linear Matrix Inequalities (LMIs) that make the system asymptotically stable, and the controller parameters and event triggering conditions are designed based on the solution of the LMIs. At the same time, a dynamic event triggering mechanism is proposed by introducing an internal dynamic variable. Experimental results show that this dynamic event triggering mechanism can avoid Zeno behavior, and the correctness and validity of the theory are illustrated by simulation examples.
  • WANG Chongren, WANG Wen, SHE Jie, LING Chen
    Computer Engineering. 2020, 46(10): 308-314. https://doi.org/10.19678/j.issn.1000-3428.0056119
    To improve the accuracy of credit risk assessment based on user behavior data from the Internet industry, this paper proposes a personal credit scoring method based on a fused deep neural network that combines a Long Short-Term Memory (LSTM) neural network and a Convolutional Neural Network (CNN). The behavior data of each user is encoded to form a matrix that includes the time dimension and the behavior dimension. By fusing the two sub-models, the LSTM model and the CNN model, based on the attention mechanism, sequence features and local features are extracted from the original user behavior data. Experimental results on real datasets show that the proposed method outperforms traditional machine learning methods and single LSTM or convolutional neural network models in terms of the KS and AUC indexes, demonstrating the effectiveness and feasibility of the method in the field of personal credit scoring.
  • CHEN Min, WANG Raofen
    Computer Engineering. 2020, 46(10): 315-320. https://doi.org/10.19678/j.issn.1000-3428.0055932
    Automatic classification of arrhythmia is very important for the diagnosis and prevention of cardiovascular diseases. Traditional classification methods require manual feature extraction from ECG signals, which has a great impact on classification accuracy. To solve this problem, a classification method based on two-dimensional images and a Transfer Convolutional Neural Network (TCNN) is proposed. The ECG signal is transformed into a two-dimensional image by means of the Gramian Angular Field (GAF), which guarantees the integrity of the ECG image while retaining the time dependence of the original signal. On this basis, combined with the idea of transfer learning, a TCNN model with a simple structure and few parameters is designed to classify the ECG images. Experimental results show that this method requires less network training time and achieves a classification accuracy of 99.82%, realizing effective classification of arrhythmia.
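A minimal sketch of the Gramian Angular Field transformation named above is given below (summation form, GASF); the resampling length and the choice of the summation rather than the difference field are assumptions about the paper's exact preprocessing.

```python
import numpy as np

def gramian_angular_field(series):
    """Map a 1-D signal to a 2-D GASF image."""
    x = np.asarray(series, dtype=np.float64)
    # rescale to [-1, 1] so arccos is defined
    x = 2.0 * (x - x.min()) / (x.max() - x.min() + 1e-12) - 1.0
    phi = np.arccos(np.clip(x, -1.0, 1.0))          # polar-coordinate angle
    return np.cos(phi[:, None] + phi[None, :])      # G[i, j] = cos(phi_i + phi_j)

beat = np.sin(np.linspace(0, 2 * np.pi, 128))       # stand-in for one ECG beat
image = gramian_angular_field(beat)                 # 128 x 128 image fed to the CNN
```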