
15 May 2020, Volume 46 Issue 5
    

  • Hot Topics and Reviews
  • HE Jun, ZHANG Caiqing, LI Xiaozhen, ZHANG Dehai
    Computer Engineering. 2020, 46(5): 1-11. https://doi.org/10.19678/j.issn.1000-3428.0057370
    Multimodal Fusion Technology(MFT) for Deep Learning(DL) refers to the conversion and fusion of information obtained by machines from texts,images,voices,videos and other materials,so as to improve the performance of the model.The universality of modalities and the popularity of DL boost the rapid development of multimodal fusion.In order to improve the performance of DL model classification or regression,this paper summarizes the multimodal fusion architectures,fusion methods and alignment technologies in the early stage of MFT development.This paper focuses on the analysis of the three fusion architectures:joint,cooperative and codec architectures,in terms of their adoption in DL and their advantages/disadvantages.The specific fusion methods and alignment technologies such as Multiple Kernel Learning(MKL),Graphical Model(GM) and Neural Network(NN) are also studied.Finally,the public datasets commonly used in multimodal fusion research are summarized,and directions of further research in cross-modal transfer learning,resolution of modal semantic conflicts and multimodal combination evaluation are discussed.
  • WU Zhaoqi, ZHANG Fan, GUO Wei, WEI Jin, XIE Guangwei
    Computer Engineering. 2020, 46(5): 12-18. https://doi.org/10.19678/j.issn.1000-3428.0055996
    Mimic defense technology in cyberspace builds a dynamic heterogeneous redundant system architecture to improve the security performance of the system.In this procedure of defense,the voting mechanism of the arbiter is an important step which directly affects the security and efficiency of the mimic system.Based on the task characteristics of the voting process,this paper improves the consistent voting algorithm and proposes a mimic arbitration optimization method based on heterogeneous degree of the executors.By combining the heterogeneous characteristics in the mimic defense system,introducing the inter-executor heterogeneity as the decision factor when choosing the executor for voting output,and considering the number of executors and historical records,the voting algorithm is made more applicable to the threat scenarios faced by mimic architecture.Experimental results show that,compared with the consistent voting algorithm,the proposed algorithm can significantly improve the security performance of the mimic system and effectively suppress the risk of common mode escape.
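To make the idea of heterogeneity-aware arbitration concrete, here is a minimal sketch (not the paper's algorithm): candidate outputs are grouped by value, and each group is scored by its size weighted by the average pairwise heterogeneity of the agreeing executors, so agreement among diverse executors counts for more. The data layout and the scoring rule are illustrative assumptions.

```python
from collections import defaultdict

def arbitrate(outputs, heterogeneity):
    """Pick the output backed by the largest, most mutually heterogeneous set of executors.

    outputs:       dict executor_id -> output value
    heterogeneity: dict (id_a, id_b) -> pairwise heterogeneity in [0, 1]
    """
    groups = defaultdict(list)                       # output value -> agreeing executors
    for ex, val in outputs.items():
        groups[val].append(ex)

    def score(executors):
        if len(executors) == 1:
            return 0.0
        pairs = [(a, b) for i, a in enumerate(executors) for b in executors[i + 1:]]
        avg_h = sum(heterogeneity.get((a, b), heterogeneity.get((b, a), 0.0))
                    for a, b in pairs) / len(pairs)
        return len(executors) * (1.0 + avg_h)        # agreement weighted by diversity

    return max(groups.items(), key=lambda kv: score(kv[1]))[0]

print(arbitrate({"e1": "ok", "e2": "ok", "e3": "bad"},
                {("e1", "e2"): 0.8, ("e1", "e3"): 0.5, ("e2", "e3"): 0.4}))  # -> ok
```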
  • ZHAO Jihong, WU Doudou, QU Hua, JI Wenjun
    Computer Engineering. 2020, 46(5): 19-25,33. https://doi.org/10.19678/j.issn.1000-3428.0055028
    The huge data interaction between sensor nodes in the Internet of Things(IoT) intensifies the problem of excessive energy consumption.However,the traditional energy-aware algorithm cannot be applied to the IoT environment with unbalanced node energy consumption.To address this problem,the energy consumption model based on wireless sensor network is redefined to ensure the minimum energy consumption while taking heterogeneity of nodes and timeliness of links into consideration.On this basis,this paper proposes an improved energy-aware virtual network mapping algorithm.In the node mapping phase,based on the principle of the closest remaining capacity,the virtual nodes are mapped to the same type of physical nodes with the least energy consumption and the appropriate resources are allocated to the links with different latency.Simulation results show that compared with the EA-VNE and EA-VNEH algorithms,the proposed algorithm can improve the utilization rate of underlying resources and reduce energy consumption of virtual network mapping by means of resource integration.Moreover,with the increase of introduced parameters,this algorithm can achieve more fine-grained resource allocation.
  • RONG Bin, WU Zhihao, LIU Xiaohui, ZHAO Yiji, LIN Youfang, JING Yizhen
    Computer Engineering. 2020, 46(5): 26-33. https://doi.org/10.19678/j.issn.1000-3428.0056316
    Traffic flow prediction is an important part of intelligent transportation systems.However,due to the influence of traffic conditions,geographical location,time and other factors,traffic flow prediction is highly non-linear and complex,which imposes a great challenge to accurate prediction.This paper proposes a novel Contextual Gated Spatio-Temporal Multi-Graph Convolutional Network(CG-STMGCN) model to predict the inflow and outflow of traffic stations.In this model,the neighborhood graph and the flow-wise graph are constructed based on the adjacent relationships and flow-wise relationships between stations to represent the proximity correlations and flow dependencies between station flows.Then a contextual gated spatio-temporal convolutional module is constructed on two graphs to capture the spatio-temporal features of station flows.Finally,Hadamard product is used to fuse the outputs of the two graphs as the final prediction result.Experimental results on the dataset of real traffic stations show that the proposed CG-STMGCN model outperforms other existing prediction models in terms of prediction performance,and has better stability.
  • HUANG Fengming, TU Shanshan, MENG Yuan
    Computer Engineering. 2020, 46(5): 34-40. https://doi.org/10.19678/j.issn.1000-3428.0055025
    Fog computing extends the computing power and data analysis applications of cloud computing to the edge of the network,meeting the requirements of low latency and mobility of Internet of Things(IoT) devices,but at the same time generating new data security and privacy protection problems.The Attribute-Based Encryption(ABE) technology in traditional cloud computing is not suitable for IoT devices with limited computing resources in the fog environment,and attribute changes are difficult to manage.Therefore,this paper proposes an ABE scheme that supports encryption and decryption outsourcing and revocation in fog computing.A three-layer system model of "cloud-fog-terminal" is constructed,and by introducing the technology of the attribute group key,the key is dynamically updated,thus the requirement of immediate attribute revocation is satisfied.Some complicated encryption and decryption operations in the terminal device are outsourced to the fog node,which greatly improves the calculation efficiency.Experimental results show that the scheme has better computational efficiency and reliability than KeyGen and Enc schemes.
  • Artificial Intelligence and Pattern Recognition
  • WEN Xiuxiu, MA Chao, GAO Yuanyuan, KANG Zilu
    Computer Engineering. 2020, 46(5): 41-46. https://doi.org/10.19678/j.issn.1000-3428.0054094
    To address complex nested relations between named entities and overlapping boundaries of adjacent named entities caused by mislabeling in corpus,this paper proposes a method of Chinese overlapping Named Entity Recognition(NER).First,a hierarchical clustering algorithm based on random merging and splitting is used to divide the labels of overlapping named entities into different clusters to build one-to-one relations between words and entity labels,which prevents the clustering of entity labels from falling into local optimization.Then,a Bidirectional Long Short Term Memory-Conditional Random Fields(BiLSTM-CRF) model integrating Chinese radicals is used in each label clustering to improve the stability of overlapping NER.Experimental results show that the proposed method can effectively avoid the impact of mislabeling on recognition through label clustering,improving the F1 value by 0.05 compared with the existing methods.
  • WU Changming, ZHAO Xingtao, LIU Kexin
    Computer Engineering. 2020, 46(5): 47-53. https://doi.org/10.19678/j.issn.1000-3428.0053894
    Feature selection is commonly used in dimensionality reduction for machine learning,but existing unsupervised feature selection algorithms often ignore the influence of ordinal locality on feature selection while preserving the local structure of data samples during dimensionality reduction.To address the problem,this paper proposes an improved Simultaneous Orthogonal Basis Clustering Feature Selection(SOCFS) algorithm based on triplet ordinal locality.The algorithm uses the local structure of triplets in data to construct ordinal relationships between data,and preserves the locality of such relationships in feature selection.On this basis,the features that can preserve the local structure and have high discriminative power are selected.Experimental results show that the improved algorithm outperforms traditional unsupervised feature selection algorithms in terms of clustering performance and convergence speed.
  • PAN Liangchen, WU Xinran, YUE Kun
    Computer Engineering. 2020, 46(5): 54-62. https://doi.org/10.19678/j.issn.1000-3428.0054183
    To address complex iterative computations,large-scale intermediate results and ineffective inference of user preference modeling from high-dimensional and sparse user rating data,this paper proposes a user preference modeling method based on Deep Belief Network(DBN) and Bayesian Network(BN).The DBN is used to classify rating data,and latent variables are used to represent user preferences that cannot be directly observed.Then,the BN with latent variables is used to describe the uncertain dependencies among related attributes in rating data.Experimental results on the MovieLens and DianPing datasets show that the proposed method can effectively describe the dependency relationships between attributes related to user preferences in rating data,and its precision and execution efficiency are higher than those of the Latent Variable Model(LVM).
  • CHEN Wenjie, WEN Yi, ZHANG Xin, YANG Ning, ZHAO Shuang
    Computer Engineering. 2020, 46(5): 63-69,77. https://doi.org/10.19678/j.issn.1000-3428.0054196
    It is difficult for traditional knowledge graph representation methods based on translation models to deal with complex relationships such as one-to-many,many-to-one and many-to-many relations.Also,they usually neglect the network structure and semantic information of the knowledge graph when studying triples.To solve these problems,this paper proposes a TransGraph model based on TransE.The model learns triples and the network structure features of the knowledge graph at the same time,so as to enhance the representation performance of the knowledge graph.On this basis,a cross-training mechanism of vector sharing is proposed in order to realize the deep fusion of network structure information and triple information.Experimental results on open datasets show that the HITS@10 and accuracy of TransGraph are significantly improved in link prediction and triple classification compared with the TransE model.
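For context, TransE (which TransGraph extends) scores a triple (h, r, t) by how close h + r is to t in the embedding space; the small sketch below shows that publicly known scoring function with toy random embeddings. The TransGraph-specific structural features and cross-training mechanism are not reproduced here.

```python
import numpy as np

def transe_score(h, r, t, norm=1):
    """TransE plausibility score: smaller ||h + r - t|| means a more plausible triple."""
    return np.linalg.norm(h + r - t, ord=norm)

# toy usage: random 50-dimensional embeddings for (head, relation, tail)
rng = np.random.default_rng(0)
h, r, t = rng.normal(size=(3, 50))
print(transe_score(h, r, t))
```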
  • WU Tao, REN Shuxia, ZHANG Shubo
    Computer Engineering. 2020, 46(5): 70-77. https://doi.org/10.19678/j.issn.1000-3428.0054035
    In order to efficiently mine and analyze complex networks,this paper proposes NIIET,a new filtering and compression algorithm for complex networks based on triangle subgraphs.NRSA,a node importance ranking algorithm,is designed to select nodes with high and low importance,which are then filtered to reduce the computing scale and compression time.The nodes at both ends of each edge and their common neighbor nodes are listed to form the triangle subgraph set.On this basis,the compression of the complex network is accomplished by analyzing the triangle subgraph set.Experimental results show that the ranking of the NRSA algorithm is reasonable and reliable.Compared with the Node_iterator algorithm,the NIIET algorithm can shorten the compression time,improve the compression rate and retain most of the structure and information of the original network.
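A minimal illustration of the triangle-subgraph construction described above (listing, for every edge, its endpoints together with their common neighbors); the NRSA ranking and the filtering steps are the paper's own and are not shown.

```python
from collections import defaultdict

def triangle_set(edges):
    """Enumerate triangles by intersecting the neighbor sets of each edge's endpoints."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    triangles = set()
    for u, v in edges:
        for w in adj[u] & adj[v]:          # a common neighbor closes a triangle
            triangles.add(frozenset((u, v, w)))
    return triangles

print(triangle_set([(1, 2), (2, 3), (1, 3), (3, 4)]))  # -> {frozenset({1, 2, 3})}
```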
  • CHEN Jianping, ZHOU Xin, FU Qiming, GAO Zhen, FU Baochuan, WU Hongjie
    Computer Engineering. 2020, 46(5): 78-85,93. https://doi.org/10.19678/j.issn.1000-3428.0054557
    Aiming at the problem of poor convergence stability caused by overestimation in the Deep Q-Network(DQN) algorithm,on the basis of traditional Temporal Difference(TD),the concept of n-order TD error is proposed and a dual-network DQN algorithm based on second-order TD error is designed.A value function updating formula based on second-order TD error is constructed.Meanwhile,a two-network model is established in combination with the DQN algorithm,and two isomorphic value function networks are obtained,which are respectively used to represent the value functions of two successive rounds,and the network parameters are cooperatively updated to improve the stability of value function estimation in the DQN algorithm.Experimental results based on the OpenAI Gym platform show that the proposed algorithm has better convergence stability than the classical DQN algorithm in solving the Mountain Car and Cart Pole problems.
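For reference, the standard one-step TD error used as the DQN learning signal, and its usual two-step extension, are written below; the paper's exact definition of the second-order TD error and the cooperative update of the two networks may differ from this textbook form.

```latex
% Standard one-step TD error for DQN (theta: online network, theta^-: target network)
\delta_t^{(1)} = r_t + \gamma \max_{a} Q\!\left(s_{t+1}, a; \theta^-\right) - Q\!\left(s_t, a_t; \theta\right)
% The usual two-step (n = 2) TD error it generalizes; shown only as the standard
% construction the paper builds on, not necessarily its exact formula
\delta_t^{(2)} = r_t + \gamma r_{t+1} + \gamma^2 \max_{a} Q\!\left(s_{t+2}, a; \theta^-\right) - Q\!\left(s_t, a_t; \theta\right)
```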
  • YIN Mingming, SHI Xiaojing, YU Hongfei, DUAN Xiangyu
    Computer Engineering. 2020, 46(5): 86-93. https://doi.org/10.19678/j.issn.1000-3428.0054793
    Nowadays,research in sentence summarization mainly focuses on monolingual materials,which means the source sentences and the target summarized phrases are in the same language,reducing the availability of information from texts in different languages.To solve the problem,this paper proposes a cross-lingual sentence summarization system.The system borrows the idea of back translation,using the neural machine translation system to translate the source end of parallel corpus of monolingual sentence summarization into another language.Then the translation is combined with summarized phrases in the target end of the parallel corpus of sentence summarization to construct a cross-lingual pseudo parallel corpus.On this basis,the contrastive attention mechanism is used to obtain most irrelevant information from the sequences of the source end and target end,solving the mismatching of lengths of source sentences and target sentences in the traditional attention mechanism.Experimental results show that compared with pipeline-based monolingual sentence summarization systems,the proposed cross-lingual system can generate more fluent summarized phrases that match the representation of human languages and are closer to the level of monolingual sentence summarization.
  • ZHU Jiang, BAO Chongming, WANG Chongyun, ZHOU Lihua, KONG Bing
    Computer Engineering. 2020, 46(5): 94-101,108. https://doi.org/10.19678/j.issn.1000-3428.0054340
    Structure holes refer to the nodes located in the key positions for information diffusion in social network,which have significant influence on public opinion control,influence analysis and information promotion in social network.In order to find structure holes in social network quickly and accurately,this paper proposes a discovery algorithm for Top-k structure holes based on the Shortest Path Increment of Graph(SPIG).The algorithm calculates and analyzes the SPIG of nodes,the Number of Connected Components(NCC) and VAR,so as to determine the value of structure hole attributes.Based on this value,the nodes are sorted to find the Top-k structure holes.Also,the Betweenness Centrality(BC) algorithm is used to filter and select the nodes,so as to significantly reduce the time complexity of the proposed algorithm.Experimental results on real networks and different sizes of LFR simulated complex networks show that the proposed algorithm is more efficient in structure hole detection than classic structure hole discovery algorithms.
  • WANG Yi, SHEN Yang, DAI Yueming
    Computer Engineering. 2020, 46(5): 102-108. https://doi.org/10.19678/j.issn.1000-3428.0054436
    It is hard for a single-channel Convolutional Neural Network(CNN) that takes word vectors as the input to fully utilize the information of text features and accurately recognize the polysemy of Chinese texts.To address the problem,this paper proposes a fine-grained multi-channel CNN model.It uses word2vec for pre-training of word vectors.Three different channels are used for convolution operations:the original word vectors,the part-of-speech word vectors that combine word vectors with part-of-speech representations,and the fine-grained character vectors.Parts of speech are labeled to disambiguate words,and character vectors are used to discover hidden semantic information.On this basis,convolutional kernels of different sizes are set to learn the features of higher-level abstractions in sentences.Simulation results show that compared with traditional CNN models,this model can significantly improve the accuracy and F1 value of sentiment classification.
  • CHEN Heng, HAN Yuting, LI Guanyu, WANG Jinghui
    Computer Engineering. 2020, 46(5): 109-114. https://doi.org/10.19678/j.issn.1000-3428.0054825
    To address the lack of relationship facts and insufficient mining of hidden knowledge in the current Knowledge Graph(KG),this paper proposes a relationship reasoning algorithm based on the semantic combination of multi-hop relational path.The algorithm embeds KG into a low-dimensional vector space and uses reinforcement learning for path discovery,so that the vectors corresponding to the entities and relationships in the path are used as the input of the Recurrent Neural Network(RNN).After iterative learning,the result vector of the semantic combination of multi-level relation path is output,and the similarity between the result vector and the target relation vector is calculated.Experimental results on the FB15K-237 and NELL-995 datasets show that the factual prediction results of the proposed algorithm are 0.314 and 0.417 respectively,which are better than the PRA,TransE,and TransH models.
  • Advanced Computing and Data Processing
  • TIAN Lu, CAO Fuyuan, YU Liqin
    Computer Engineering. 2020, 46(5): 115-121. https://doi.org/10.19678/j.issn.1000-3428.0054536
    Most of the existing algorithms for matrix datasets obtain clustering results by randomly selecting the initial cluster center.In order to overcome the influence of different initial cluster centers on the clustering results,this paper proposes a new initial cluster center selection algorithm for categorical matrix data.The density of the matrix objects and the distance between the matrix objects are defined according to the frequency of the attribute values,and the maximum and minimum distance algorithms are extended to realize the selection of the initial cluster center.Experimental results on seven real datasets show that the algorithm has better clustering effect than the initial cluster center selection algorithms CAOICACD and BAIICACD.
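A compact sketch of the classical maximum-minimum distance idea that the selection extends: seed with a central object, then repeatedly add the object whose minimum distance to the chosen centers is largest. The seeding rule and the simple categorical mismatch distance below are simplified assumptions, not the paper's exact density and distance definitions.

```python
import numpy as np

def max_min_centers(X, k, dist):
    """Pick k initial centers by the max-min distance rule."""
    n = len(X)
    d = np.array([[dist(X[i], X[j]) for j in range(n)] for i in range(n)])
    centers = [int(np.argmin(d.sum(axis=1)))]        # seed: the most central object
    while len(centers) < k:
        min_d = d[:, centers].min(axis=1)            # distance to the nearest chosen center
        min_d[centers] = -1                          # never re-pick a center
        centers.append(int(np.argmax(min_d)))        # farthest object becomes a new center
    return centers

# toy categorical objects compared by simple attribute-mismatch distance
X = [("a", "x"), ("a", "y"), ("b", "y"), ("b", "z")]
print(max_min_centers(X, 2, lambda p, q: sum(pi != qi for pi, qi in zip(p, q))))
```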
  • SHANG Lei, LIU Xiping
    Computer Engineering. 2020, 46(5): 122-130,138. https://doi.org/10.19678/j.issn.1000-3428.0055397
    The Data Layout(DL) for Scientific Workflow(SW) in cloud environment becomes a hot issue in current workflow research.Considering the many-to-many relationship between tasks and data in scientific workflows,it can be found that the data transmission costs of different data layout schemes are different,which can greatly affect the running cost of workflow.In order to reduce the data transmission costs in SW,this paper proposes a SW DL method based on task assignment and dataset replicas.The method starts with task assignment,assigning these tasks based on quantitative calculation of task dependencies,and then proposes a two-stage DL method based on the dataset replicas according to the assignment result,so as to achieve the optimization of transmission costs in running scientific workflows.Sample results show that this method can effectively reduce the running cost of scientific workflows compared with the workflow layer method.
  • SHI Mingyang, WANG Peng, WANG Wei
    Computer Engineering. 2020, 46(5): 131-138. https://doi.org/10.19678/j.issn.1000-3428.0056243
    Time series segmentation and state recognition is an important time series mining task that can be used to automatically identify the running state of the monitored object,but several unsupervised time series segmentation algorithms fail to meet the state recognition expectations of users.To address the problem,this paper proposes a supervised time series segmentation algorithm.It constructs a characteristic set and on this basis trains the parameters of the characteristic probability model automatically,so as to build the characteristic Gaussian probability distribution model and design the characteristics of the related sequence.Meanwhile,the matching loss calculation and an improved greedy strategy are used to design feature weight constraints,and the segmentation efficiency is increased by using two optimization methods:adding constraints on segmentation positions and incremental calculation.Experimental results on multiple real datasets show that,compared with the pHMM and AutoPlait algorithms,the proposed algorithm can fully express all categories of states and implement more accurate segmentation of time series.
  • ZHOU Sheng, LIU Sanmin
    Computer Engineering. 2020, 46(5): 139-143,149. https://doi.org/10.19678/j.issn.1000-3428.0054753
    To address concept drift and noise in data stream classification,this paper proposes a multi-source transfer learning method based on sample certainty.First,the method stores classifiers trained in the multi-source domain.Then the method calculates the category posterior probability and sample certainty of each source domain classifier to each sample in the target domain data block.On this basis,the source domain classifiers of which the sample certainty satisfies the current threshold limit are integrated with target domain classifiers online,so as to transfer the knowledge of multi-source domains to the target domain.Experimental results show that the proposed method can effectively eliminate the adverse effects of noisy data streams on uncertain classifiers,and has better classification accuracy and anti-noise stability than the multi-source transfer learning methods based on accuracy selection integration.
  • LU Shentao, GE Hongwei
    Computer Engineering. 2020, 46(5): 144-149. https://doi.org/10.19678/j.issn.1000-3428.0054716
    Travel-Time based Hierarchical Clustering(TTHC) is a potential energy clustering algorithm with a good clustering effect,but it cannot identify the noisy data points in the dataset.Therefore,this paper proposes an anti-noise travel-time based potential energy clustering algorithm.The parent node of each data point is found through the potential energy value of each data point and the similarity between data points,and the distance between each data point and its parent node is calculated.Then,according to the distance and the potential energy values of data points,the λ value is obtained.An increasing curve is constructed according to the λ value,and the noise points are identified by finding the inflection points in the increasing curve.The noise data are classified into a new cluster.For the dataset after removing the noise points,the distance between each data point and its parent node is used for hierarchical clustering to obtain the clustering result.Experimental results show that the proposed algorithm can identify the noisy data points in the datasets and thus obtain better clustering effects.
  • WANG Bin, FANG Xinxiu, WEI Tianyou
    Computer Engineering. 2020, 46(5): 150-156. https://doi.org/10.19678/j.issn.1000-3428.0054747
    To address the low mining efficiency of NFWI,a WN-list based algorithm for weighted frequent itemsets mining,this paper proposes a WDiffNodeset-based weighted frequent itemsets mining algorithm,DiffNFWI.The algorithm extends the data structure of DiffNodeset to get WDiffNodeset,and then combines set enumeration tree with hybrid search strategy to find the weighted frequent itemsets,so as to reduce intersection operations and achieve efficient search.The difference set strategy is used to calculate the weighted support degree of the itemsets to reduce the amount of calculation.Experimental results show that the efficiency of the DiffNFWI algorithm is better than that of the NFWI algorithm on mushroom,pumsb and other datasets.
  • Cyberspace Security
  • QIN Biao, GUO Fan, YANG Chenxia
    Computer Engineering. 2020, 46(5): 157-166. https://doi.org/10.19678/j.issn.1000-3428.0055290
    Static analysis methods are widely used to detect privacy leaks in Android applications,and potential bugs are reported in the form of (Source,Sink) pairs,but many false alarms are generated as well.To address the problem,this paper proposes a context-sensitive and field-sensitive taint analysis approach.The operational semantics of taint propagation and the consistency constraints are formally defined to ensure that taint propagation is semantically correct.Trace segments generated after instrumenting and running an Android application are also analyzed to verify whether a potential bug is real.A prototype system is implemented based on Soot and tested on seventy applications from the DroidBench dataset.Experimental results show that the proposed method successfully verifies four false positives and finds eight false negatives,demonstrating that the proposed method is capable of verifying the correctness of static analysis results.
  • Lü Guangqiu, LI Wei, CHEN Tao, NAN Longmei
    Computer Engineering. 2020, 46(5): 167-173,180. https://doi.org/10.19678/j.issn.1000-3428.0054776
    In data-intensive applications such as cryptographic SoC,data transmission speed has gradually become a bottleneck restricting cipher processing performance.To address the problem,this paper proposes an optimized high-performance DMA design method for cryptographic SoC based on the characteristics of stream processing in cryptographic SoC.First,a dedicated channel for DMA transfer of a specific module is opened,and data is read/written in parallel to improve the utilization rate of bus bandwidth in DMA transmission of the specific module.Second,a special work mode is added for autonomous control of repeated task transmission,so as to improve the utilization rate of transmission bandwidth.On this basis,dynamic adjustment technology based on multi-channel priority optimization is used to achieve more efficient adaptive transmission under multiple tasks.Simulation results show that the highest frequency of the proposed DMA in the 55 nm process is 910 MHz.The average utilization rates of the bus and the coprocessor are 91% and 54% respectively.Compared with the general design of DMA,the proposed design increases the performance of the ZUC,SNOW,SM3,SM4 and AES algorithms in cryptographic SoC by 216%,222%,123%,69% and 221% respectively.
  • CAI Rongyan, WANG He, YAO Qigui, HE Gaofeng
    Computer Engineering. 2020, 46(5): 174-180. https://doi.org/10.19678/j.issn.1000-3428.0055037
    In order to realize the accurate detection of malicious mobile applications and ensure the security of mobile devices,a malicious mobile application detection method based on DNS is proposed.DNS domain name is used as the analysis object of detection to identify the malicious domain name in the network traffic,the time characteristics of DNS request traffic are used to find the associated domain name of the malicious domain name,and the associated domain name is compared with the text classification sample library to determine the name of the malicious mobile application.The experimental results show that this method can be effectively applied to the security protection of mobile devices.The detection rate of this method in the public test data set is 97.1%,and a total of thirteen malicious mobile applications are detected in the actual network deployment,and the number of false positives is 0.
  • LONG Hao, ZHANG Shukui, ZHANG Li
    Computer Engineering. 2020, 46(5): 181-186,192. https://doi.org/10.19678/j.issn.1000-3428.0053980
    In existing privacy protection methods based on the anonymity mechanism and scrambled chaos,malicious users can still infer the activity trajectory of a user from obtained spatial-temporal sensing data.In order to establish an effective chaotic region to protect user privacy and achieve effective data transmission in the chaotic region,this paper proposes a privacy protection method based on Voronoi cells for mobile crowd sensing network.Users establish Voronoi cells and then a chaotic region with other users.The calculated participant representatives in the chaotic region submit sensing data by using data fusion,so as to hide user privacy in the irregular Voronoi cells and chaotic region.Experimental results show that the method can effectively establish irregular chaotic regions,and improve the success rate and efficiency of privacy protection for users.
  • NIU Shufen, LI Wenting, WANG Caifen
    Computer Engineering. 2020, 46(5): 187-192. https://doi.org/10.19678/j.issn.1000-3428.0054302
    To address complex bilinear pairing operations and certificate management problems of existing blind proxy re-signature schemes,by using WANG’s scheme for reference,this paper proposes a partially blind proxy re-signature scheme without bilinear pairing based on hard problem of integer factorization.Also,the new formalized definition and security model of the scheme are given under different cryptosystems and the framework of partial blindness.In the random oracle model,the scheme satisfies the unforgeability and partial blindness under adaptive chosen message attacks,and is able to achieve the transparent conversion from the original signer to the proxy re-signer,so as to protect privacy of the original signer and reduce the computational complexity of the partially blind proxy re-signature algorithm,and improve the computational efficiency of the signature verification algorithm.Efficiency comparison and analysis results show that the proposed scheme can ensure applicability while effectively improving partial blindness.
  • WANG Zhong, HAN Yiliang
    Computer Engineering. 2020, 46(5): 193-199. https://doi.org/10.19678/j.issn.1000-3428.0054712
    To address security issues of network communication in the post-quantum era,this paper studies the Niederreiter cryptosystem in code-based cryptography,and combines the double public key cryptographic scheme based on the improved Niederreiter scheme with the Xinmei signature scheme to construct an anti-quantum signcryption scheme.Security analysis results show that the proposed signcryption scheme can meet the security requirements of IND-CPA and EUF-CMA,and can achieve excellent defense against direct decoding attacks and ISD attacks.Compared with signcryption schemes that implement encryption after signing,the proposed scheme can reduce the amount of ciphertext by 50%,providing confidentiality and unforgeable security for network communication in the post-quantum era.
  • Mobile Internet and Communication Technology
  • GU Jing, DENG Yifei, ZHANG Xin
    Computer Engineering. 2020, 46(5): 200-206. https://doi.org/10.19678/j.issn.1000-3428.0055183
    As the number of communication users increases,the load of low-power base stations becomes unbalanced,resulting in gradually rising interference for cell edge users and reduced communication quality of the whole cell.To address the problem,this paper proposes an HSARSA(λ) algorithm based on a heuristic function for dynamic selection of the Cell Range Extension(CRE) bias value in dual-layer heterogeneous networks.The heuristic function is used to improve the SARSA(λ) algorithm in Reinforcement Learning(RL),and the algorithm is adopted to find the optimal CRE bias value,so as to relieve the high hot spot load pressure of the macro base station and improve the network capacity.Simulation results show that compared with the SARSA(λ) and Q-Learning algorithms,the throughput of edge users of the system obtained by the proposed algorithm is improved by 7% and 12% respectively,and the energy efficiency of the system is improved by 11% and 13%,which indicates a significant increase in the communication quality of the system.
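As a rough illustration of heuristically accelerated SARSA(λ)-style learning, the sketch below biases epsilon-greedy action selection with a heuristic function H and applies the usual eligibility-trace update. The mapping from CRE bias values to states and actions and the paper's specific heuristic are not shown; all names and parameter values here are assumptions.

```python
import random
import numpy as np

def choose_action(Q, H, s, n_actions, xi=1.0, eps=0.1):
    """Epsilon-greedy choice biased by a heuristic H(s, a), as in heuristically accelerated RL."""
    if random.random() < eps:
        return random.randrange(n_actions)
    return int(np.argmax(Q[s] + xi * H[s]))

def sarsa_lambda_step(Q, E, s, a, r, s2, a2, alpha=0.1, gamma=0.95, lam=0.9):
    """One SARSA(lambda) update with replacing eligibility traces."""
    delta = r + gamma * Q[s2, a2] - Q[s, a]   # on-policy TD error
    E[s, a] = 1.0                             # replacing trace for the visited pair
    Q += alpha * delta * E                    # credit all recently visited pairs
    E *= gamma * lam                          # decay traces

# toy usage on a 5-state, 3-action problem
Q = np.zeros((5, 3)); E = np.zeros((5, 3)); H = np.ones((5, 3))
a = choose_action(Q, H, s=0, n_actions=3)
sarsa_lambda_step(Q, E, s=0, a=a, r=1.0, s2=1, a2=0)
```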
  • KONG Feiyue, JIANG Xueqin, WAN Xuefen, CHEN Sijing, CUI Jian, YANG Yi
    Computer Engineering. 2020, 46(5): 207-215. https://doi.org/10.19678/j.issn.1000-3428.0056226
    The Enhanced Quasi-Maximum Likelihood(EQML) decoder has better decoding performance for short Low Density Parity Check(LDPC) codes than traditional Belief Propagation(BP) decoder,and can meet the high reliability requirements of 5G mobile communication.However,its decoding speed is greatly reduced due to its complex computational structure.To address the problem,this paper proposes a parallel acceleration scheme based on Graphics Processing Unit(GPU) for EQML decoder.The scheme compresses and stores the parity check matrix of irregular LDPC codes,and resorts the traditional BP decoding algorithms to maximize the utilization of threads in Kernel.Then parallel decoding is implemented for multi-code words in each stage of reprocessing,so as to realize memory access optimization and parallel decoding of streams.Experimental results show that the GPU-based EQML decoder improves the speed by two orders of magnitude compared with the CPU-based decoder,while keeping the error correction performance.
  • XIA Yu, LIU Wei, LUO Rong, HU Shunren
    Computer Engineering. 2020, 46(5): 216-223. https://doi.org/10.19678/j.issn.1000-3428.0054810
    The existing estimation methods based on Link Quality Indicator(LQI) fail to effectively solve the problem of large fluctuations of LQI parameters.Besides,the mapping relationship model between LQI and Packet Receiving Rate(PRR) does not consider the actual physical meaning.Therefore,by inferring the theoretical relationship between LQI and PRR,this paper establishes a hyperbolic tangent model with more actual physical meaning,and proposes a link quality estimation method.On the basis of exponentially weighted Kalman filtering,a more stable LQI estimation is obtained,and then the link quality is quantitatively estimated by the hyperbolic tangent model.Experimental results show that the proposed method can reflect the link quality more faithfully.Compared with the LETX and K-CCI methods,the estimation error of this method is reduced by 11.21%~52.26% on links of different quality.
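An illustrative sketch of the two-stage idea: smooth raw LQI readings with a scalar Kalman filter, then map the smoothed value to a PRR estimate through a hyperbolic tangent curve. The filter here is a plain (not exponentially weighted) Kalman filter and the tanh parameters are made-up placeholders, not the paper's fitted model.

```python
import math

def kalman_smooth(lqi_samples, q=0.5, r=4.0):
    """Smooth noisy LQI readings with a scalar constant-state Kalman filter."""
    x, p = float(lqi_samples[0]), 1.0    # state estimate and its variance
    for z in lqi_samples[1:]:
        p += q                           # predict: variance grows by process noise q
        k = p / (p + r)                  # Kalman gain (r = measurement noise)
        x += k * (z - x)                 # correct with the new measurement
        p *= (1.0 - k)
    return x

def lqi_to_prr(lqi, a=0.08, b=90.0):
    """Hyperbolic-tangent mapping from a smoothed LQI value to an estimated PRR in (0, 1)."""
    return 0.5 * (1.0 + math.tanh(a * (lqi - b)))

print(lqi_to_prr(kalman_smooth([88, 92, 95, 90, 93])))
```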
  • LI Cuiran, ZHANG Wenbo, Lü Anqi
    Computer Engineering. 2020, 46(5): 224-229,239. https://doi.org/10.19678/j.issn.1000-3428.0054973
    When a train of high-speed railway passes through mountain scenes,the wireless signals show obvious fading characteristics due to the uneven distribution of scatterers.To address the problem,this paper proposes a wireless channel modeling method between the base station and the train running at a high speed.In different ranges of the train’s position relative to the base station,Markov chain is used to simulate the state changes of average received Signal to Noise Ratio(SNR).According to the path loss model of high-speed railway in mountainous scenario,the average SNR threshold,quantified value and the channel state transition matrix are calculated to establish the Finite State Markov Chain(FSMC) channel model.Simulation results show that the Mean Square Error(MSE) of the FSMC model proposed in this paper is the smallest compared with that of the channel model established by evenly and unevenly dividing the range of the train’s position,which demonstrates that the proposed model can effectively evaluate the quality of communication between the train and the base station.
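A small sketch of how an FSMC channel model can be built from an average-SNR trace: quantize the SNR into states with thresholds and estimate the state transition matrix by counting observed transitions. The thresholds and the position-dependent partitioning used in the paper are not reproduced; the values below are toy examples.

```python
import numpy as np

def fsmc_transition_matrix(snr_series, thresholds):
    """Quantize an average-SNR series into states and estimate the Markov transition matrix."""
    states = np.digitize(snr_series, thresholds)     # state index for each sample
    n = len(thresholds) + 1
    counts = np.zeros((n, n))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1                            # count observed state transitions
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

# toy SNR trace (dB) quantized by two thresholds into three channel states
print(fsmc_transition_matrix([3, 5, 9, 12, 8, 4, 11], thresholds=[5, 10]))
```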
  • LIU Yang, JIANG Haibo, WANG Zheng, PANG Zhenjiang, LIU Zhenyao, GAO Chao, HU Chengbo, LU Yongling, SUN Haiquan, XU Jiangtao
    Computer Engineering. 2020, 46(5): 230-239. https://doi.org/10.19678/j.issn.1000-3428.0054908
    In order to reduce energy consumption and extend the life of sensors in the transmission line monitoring network,this paper proposes a joint optimization strategy for media access control and routing.A Wireless Sensor Network(WSN) communication framework is constructed,based on which an adaptive intra-cluster scheduling strategy is given to reduce the idle monitoring of sensor nodes and thereby to reduce the energy consumption of nodes.An on-demand routing protocol is proposed to ensure optimal inter-cluster routing selection based on energy level and channel quality.With consideration of both the residual energy of cluster head and the distance from cluster head to base station,the non-uniform cluster technology is used to balance the node energy distribution,which prolongs network lifetime.Energy consumption and delay models are constructed for performance evaluation. Experimental results show that the scheme can significantly reduce the data transmission delay while saving energy.
  • ZHOU Kaifu, CHENG Wei, DOU Lichao, PENG Cenxin
    Computer Engineering. 2020, 46(5): 240-246. https://doi.org/10.19678/j.issn.1000-3428.0055075
    Aiming at the problem that the resource allocation methods of the Orthogonal Frequency Division Multiple Access(OFDMA)-based Decode-Forward(DF) relay system cannot take both system capacity and user fairness into account,a new subcarrier and power resource allocation algorithm is proposed.This algorithm is composed of subcarrier allocation and matching and power allocation.In the process of subcarrier allocation and matching,a new synchronous subcarrier difference minimum matching method is designed,which can match the subcarriers of the two-hop link pairing to the greatest extent.In the process of power allocation,the power of each subcarrier pair is adjusted by the Lagrange method to further improve the transmission rate of the system.Simulation results show that the proposed strategy can give better consideration to both system capacity and user fairness when applied to different OFDMA subcarrier allocation algorithms.
  • Graphics and Image Processing
  • ZHENG Ye, ZHAO Jieyu, WANG Chong, ZHANG Yi
    Computer Engineering. 2020, 46(5): 247-253. https://doi.org/10.19678/j.issn.1000-3428.0056642
    In partial pedestrian re-identification,serious spatial misalignment will be caused when the partial image of a pedestrian is directly compared with the holistic image,leading to a failure in target detection.To solve the mismatch of the partial pedestrian image and the holistic image of the same size,this paper proposes a Pose-Guided Alignment Network(PGAN) model.The PGAN firstly introduces the pose into Pose-Guided Spatial Transformation(PST) module as auxiliary information,extracts the pedestrian image after affine transformation from the partial image and holistic image,and compares the pedestrian image with the standard pose.Then the Convolutional Neural Network(CNN) is used to learn the features for partial pedestrian re-identification.Experimental results on the Partial-REID dataset show that the rank-1 accuracy of the PGAN model reaches 65%,which is 3.7% higher than that of the baseline model that directly extracts the global features with Deep Convolutional Neural Network(DCNN).The results demonstrate the proposed model has excellent performance in partial image alignment and pedestrian re-identification.
  • MA Zhenhuan, GAO Hongju, LEI Tao
    Computer Engineering. 2020, 46(5): 254-258,266. https://doi.org/10.19678/j.issn.1000-3428.0054964
    To address the inefficient fusion of some features in the full Convolutional Neural Network(CNN) decoder for semantic segmentation,this paper proposes an enhanced feature fusion decoder.The decoder cascades high-level features and low-level features after dimensionality reduction.Then,after the convolution operations,it introduces an attention mechanism over its squared term,and predicts the weights of each channel for its own term and its squared term by convolution.Finally,the weighted terms are enhanced by multiplication and summed.Experimental results on the PASCAL VOC 2012 dataset show that,compared with the original network,the proposed method increases the mIoU value by 2.14%.Decoding results under different ways of feature fusion also demonstrate that it outperforms other methods under the same framework.
  • LIANG Meng, SHI Xiaoshuang
    Computer Engineering. 2020, 46(5): 259-266. https://doi.org/10.19678/j.issn.1000-3428.0054708
    Digital cameras tend to be disturbed by regular patterns of Color Filter Array(CFA) sampling frequency,leading to Moire fringes in output images.To solve the problem,this paper proposes a Moire fringe elimination algorithm based on second-order Newton interpolation approximation for digital images.The algorithm uses wavelet transform to extract the high frequency information of the G component in horizontal and vertical directions,and implements frequency domain transformation on the high frequency information to simulate the aliasing process of CFA.The Moire region and its potential region in the image are detected.Then,for the detected Moire regions,the second-order Newton interpolation is used to obtain the estimated value of the G component in each direction,and the estimated values are weighted and averaged to recover the lost G component.With the assistance of color difference space model interpolation,the lost R and B components are recovered to obtain the final image with full RGB information and without Moire fringes.Experimental results show that the proposed algorithm can effectively eliminate Moire fringes without causing any damage to the color quality of the image.Also,the proposed algorithm has a higher Peak Signal to Noise Ratio(PSNR) of recovered images than bilinear interpolation,Hibbard and other algorithms,and has better subjective visual effects.
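For reference, second-order Newton (divided-difference) interpolation through three samples, which the G-component recovery step builds on, can be written as below; the direction-dependent weighting and the color difference space model are the paper's own and are omitted.

```python
def newton2(x0, y0, x1, y1, x2, y2, x):
    """Second-order Newton divided-difference interpolation through three samples."""
    d01 = (y1 - y0) / (x1 - x0)                  # first divided difference
    d12 = (y2 - y1) / (x2 - x1)
    d012 = (d12 - d01) / (x2 - x0)               # second divided difference
    return y0 + d01 * (x - x0) + d012 * (x - x0) * (x - x1)

# estimate a missing sample at position 2 from samples at 0, 1 and 3 (points on x**2)
print(newton2(0, 0.0, 1, 1.0, 3, 9.0, 2))        # -> 4.0
```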
  • SUN Dongmei, ZHANG Feifei, MAO Qirong
    Computer Engineering. 2020, 46(5): 267-273,281. https://doi.org/10.19678/j.issn.1000-3428.0054134
    In Facial Expression Recognition(FER),sufficient training samples have a significant influence on recognition results.To address insufficient database samples in the natural environment,this paper proposes a Label-guided Domain Adaptation method in Generative Adversarial Network(LDAGAN) for FER by using laboratory environment database samples.This method adopts the generation model of GAN and takes emotional labels as an auxiliary condition.Then the method uses the laboratory environment database samples to generate samples similar to those of the natural environment database,so as to build a bridge between the laboratory environment database and the natural environment database,enlarging the natural environment database.The generated samples assist in learning the emotional features of the natural environment database.Experimental results on natural environment facial expression databases such as RAF_DB show that the proposed method achieves an improvement of 6% to 9% in FER accuracy compared with the Boosting-POOF and PixelDA methods.
  • SHEN Zejun, DING Feifei, YANG Wenyuan
    Computer Engineering. 2020, 46(5): 274-281. https://doi.org/10.19678/j.issn.1000-3428.0054917
    Video tracking is an important research direction in computer vision.Many tracking algorithms achieve high performance by integrating multiple types of features,but most of them fail to fully exploit the granularity relationship between multiple features.To address the problem,this paper proposes a multi-granularity video tracking algorithm using multi-granularity correlation filters based on the concept of granular computing.First,the features of video images are divided and the correlation filters based on different granularities are constructed.Then,the correlation filters implement tracking independently,and the optimal result is selected based on the score of robustness evaluation in each frame.On this basis,the tracking results of each frame are integrated as the final result.Experiments on two open datasets,OTB-2013 and OTB-2015,show that compared with the video tracking algorithm DCFNet,the proposed algorithm has higher accuracy and better spatial and temporal robustness.It has excellent video tracking performance especially in the case of fast motion,in/out-of-plane rotation and scale change.
  • Development Research and Engineering Application
  • CAO Jiamin, FU Qiwei, ZHOU Qiushi, QIN Xiaowei, CAI Chao
    Computer Engineering. 2020, 46(5): 282-290,297. https://doi.org/10.19678/j.issn.1000-3428.0054263
    This paper analyzes and studies the characteristics of aircraft operational data in flight route planning software,and proposes a learning method for route planning strategy based on the XGBoost algorithm and K-prototypes algorithm.During sample collection and classification,the features of constraints and the operation of planners are analyzed,and constraints are accordingly divided into two categories:constraints of the aircraft environment and constraints related to aircraft features.Relevant strategies are learnt by using the XGBoost algorithm and K-prototypes algorithm respectively.Constraints related to aircraft features are further subdivided for more specific learning of complex constraints and classified management of samples.If a flight route does not meet the constraints,the obtained planning strategies are returned to planners to provide strategic guidance.Experimental results show that the proposed method can effectively extract flight route planning strategies and provide strategic guidance information,which reduces the workload of planners,improving the efficiency of interactive planning and the intelligence of the planning software.
  • DUAN Dagao, LIANG Shaohu, ZHAO Zhendong, HAN Zhongming
    Computer Engineering. 2020, 46(5): 291-297. https://doi.org/10.19678/j.issn.1000-3428.0054243
    Chinese Punctuation Prediction(PP) is an important task of Natural Language Processing(NLP),which can help people eliminate ambiguity and understand the text more accurately.In order to solve the problem that the self-attention mechanism cannot process sequence position information,this paper proposes a Chinese punctuation prediction model based on the self-attention mechanism.This model stacks multiple layers of Bi-directional Long Short-Term Memory(Bi-LSTM) networks on the basis of the self-attention mechanism,and combines part-of-speech and grammatical information for joint learning to complete the punctuation prediction.The self-attention mechanism can capture the relationship between any two words without relying on their distance,and the accuracy of predicted punctuation can be improved by part-of-speech and grammatical information.Experimental results on real news datasets show that the F1 value of the proposed model reaches 85.63%,which is significantly higher than that of traditional CRF and LSTM prediction methods,and the model achieves accurate prediction of Chinese punctuation.
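For context, the core of the self-attention mechanism mentioned above is scaled dot-product attention, in which every position attends to every other position regardless of distance. The toy sketch below uses the input itself as query, key and value; real models (including this one) use learned projections, positional information and multiple layers, which are not shown.

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a sequence X of shape (seq_len, d)."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                     # pairwise similarity between positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over positions
    return weights @ X                                # each output mixes all positions

print(self_attention(np.random.default_rng(0).normal(size=(4, 8))).shape)  # (4, 8)
```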
  • YIN Jiahao, LIU Shijie, BAO Yu, YANG Xuan, ZHU Ziwei
    Computer Engineering. 2020, 46(5): 298-304,311. https://doi.org/10.19678/j.issn.1000-3428.0054091
    For the assessment of the acceleration waveform of the external cardiac massage,the existing methods of calculating the depth of cardiac massage using the acceleration waveform integral have the problems of integral drift and error accumulation.On the basis of waveform segmentation and label correction,this paper proposes a recognition algorithm based on one-dimensional convolutional neural network for external cardiac massage waveform.The filtered data is pulse-recognized and the recognized pulse is segmented with the sliding window model to obtain the acceleration waveform of a single massage.Then the data tags are corrected according to the degree of data discretization,which solves the problem of low label credibility.A one-dimensional convolutional neural network model is established and optimized by using learning rate decay and the Adam algorithm.Experimental results show that the one-dimensional convolutional neural network achieves an average accuracy of 99.4%,which is nearly 5% higher than the traditional integral algorithm and BP neural network algorithm.Also,the method is not affected by factors such as massage occlusion and electromagnetic interference,having a good effect on the assessment of external cardiac massage.
  • ZHANG Tingfang, HUANG Hailin, GUO Jinlin, CAO Ming
    Computer Engineering. 2020, 46(5): 305-311. https://doi.org/10.19678/j.issn.1000-3428.0054611
    In the communication of the in-vehicle CAN bus,the collision between messages and the waiting delay of low-priority messages seriously affect the stability and real-time performance of the communication.Through message delay analysis in the CAN control system,this paper determines that queuing waiting time is the key factor affecting communication,and by combining the ideas of the improved shared clock algorithm and the dynamic ID sequence algorithm,it proposes a Shared ID Sequence(SIDS) hybrid algorithm.Node messages are sent according to the ID sequence,avoiding the collision of messages sent at the same time and eliminating the queue waiting time of the messages,thereby improving the real-time performance and stability of the network.Simulation results show that the algorithm can avoid the collision of messages,enhance the determinism of message transmission,and effectively improve network performance.
  • QIU Yu, CHENG Li
    Computer Engineering. 2020, 46(5): 312-320. https://doi.org/10.19678/j.issn.1000-3428.0054483
    Traditional recognition methods for named entities do not work well for entities in specific domains,as they usually have more complex structures and types than those in the general domain.To address the problem,this paper takes the fiscal and taxation domain as an entry point to study entity recognition and tagging,so as to implement dynamic expansion of knowledge base.According to the characteristics of the fiscal and taxation domain,a hierarchical entity type set is defined,and a training corpus is obtained by using remote monitoring.Then a deep neural network model based on combined character features and word features is used for entity boundary recognition.Entity type tagging is taken as a multi-label and multi-type classification task,and on this basis a method based on ensemble learning is proposed for entity type tagging.Experimental results on real datasets show that compared with basic methods including logistic regression and support vector machine,the proposed method has higher accuracy,recall and F value.