
15 January 2020, Volume 46 Issue 1
    

  • YANG Guanzhi, CHEN Pengfei, CUI Xinkai, HOU Weiyan
    Computer Engineering. 2020, 46(1): 1-14. https://doi.org/10.19678/j.issn.1000-3428.0055006
    Narrow Band Internet of Things(NB-IoT) is a Low Power Wide Area Network(LPWAN) technology designed for massive Machine-Type Communication(mMTC) scenarios. It can handle large-scale low-power connections and provide wide coverage with deep indoor penetration. To explore the performance of NB-IoT in practical applications, this paper introduces its standardization process, technical characteristics, physical layer structure and main signaling processes, and compares the Control Plane(CP) and User Plane(UP) data transmission modes. On this basis, the main indicators of NB-IoT communication quality, namely signal strength at different distances, actual uplink and downlink transmission rates, PING delay and penetration performance in semi-closed environments, are tested outdoors, and the structure and working process of the closed-loop communication system are described. The BC95 NB-IoT module based on the Huawei Boudica 120 chip is used for testing. The results show that, within a range of about 250 m, the signal strength of NB-IoT is -70 dBm~-80 dBm, the average uplink transmission rate is about 4 kb/s, the average downlink transmission rate is about 13 kb/s~18 kb/s, and the PING delay at different distances stays within 350 ms~380 ms.
  • DU Shiyu, HAN Meng, SHEN Mingyao, ZHANG Chunyan, SUN Rui
    Computer Engineering. 2020, 46(1): 15-24,30. https://doi.org/10.19678/j.issn.1000-3428.0055747
    This paper reviews existing ensemble classification algorithms for data streams with concept drift in terms of their basic concepts, related studies, scope of application and advantages and disadvantages. Among these algorithms, those for sudden, gradual, reoccurring and incremental drifts are analyzed in detail. The paper also examines the learning strategies and key techniques used in the algorithms, including Bagging, Boosting, base classifier combination, online learning, block-based ensembles and incremental learning. It then points out the main problems still to be solved by existing ensemble classification algorithms for data streams. Finally, it analyzes and prospects directions for further study, including dynamic updating of ensemble base classifiers, weighted combination of ensemble base classifiers and rapid detection of multiple types of concept drift.
  • WANG Fumin, NI Ming, ZHOU Ming, WU Yongzheng
    Computer Engineering. 2020, 46(1): 25-30. https://doi.org/10.19678/j.issn.1000-3428.0055347
    When classical approximation algorithms are used to solve the max-cut problem, the time complexity increases with the complexity of the graph. To improve solution efficiency, this paper uses a quantum adiabatic approximation algorithm to solve for the ground state of the Hamiltonian of the max-cut problem, where the ground state corresponds to the optimal solution. With the quantum adiabatic approximation algorithm, the time complexity is independent of the number of vertices and edges of the graph, and the max-cut problem can be computed within a finite number of steps. The computing process is implemented with the ProjectQ software framework. By establishing the evolutionary path from the initial Hamiltonian to the max-cut problem Hamiltonian, the change of the expected value is analyzed to determine whether the algorithm can find the optimal solution of the max-cut problem. Numerical analysis results show that this algorithm can effectively improve the efficiency of solving the max-cut problem: the accuracy of the solution is 0.9999 for a 3-vertex undirected graph and a 6-vertex undirected sparse graph, and 0.9696 for the 6-vertex undirected complete graph.
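The ground-state formulation above can be illustrated classically. The sketch below (plain Python, a brute-force check for intuition only, not the paper's ProjectQ-based adiabatic evolution) encodes the max-cut problem Hamiltonian over spin configurations and finds its ground state, whose energy gives the optimal cut:

```python
from itertools import product

def maxcut_ground_state(n, edges):
    """Brute-force the ground state of the max-cut problem Hamiltonian.

    Each spin configuration z in {+1, -1}^n has cut value
    sum over edges (i, j) of (1 - z_i * z_j) / 2; the Hamiltonian's
    ground state is the configuration maximizing this cut.
    """
    best_cut, best_z = -1, None
    for z in product((1, -1), repeat=n):
        cut = sum((1 - z[i] * z[j]) // 2 for i, j in edges)
        if cut > best_cut:
            best_cut, best_z = cut, z
    return best_cut, best_z

# 3-vertex triangle: any cut isolating one vertex crosses 2 edges
cut, spins = maxcut_ground_state(3, [(0, 1), (1, 2), (0, 2)])
```

A quantum adiabatic solver replaces this exponential enumeration with a slow evolution from an easily prepared initial Hamiltonian to the problem Hamiltonian above.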
  • YAN Yang, SUN Lijun, ZHU Lanting
    Computer Engineering. 2020, 46(1): 31-37. https://doi.org/10.19678/j.issn.1000-3428.0055105
    The intelligent travel of the new-generation intelligent traffic system and the intelligent decision-making of traffic big data need accurate and timely short-term traffic flow prediction. Deep learning can generate features through machine learning technology, which provides a new solution to short-term traffic prediction. Based on deep learning models, this paper proposes a short-term traffic flow prediction method that combines a Convolution-Gated Recurrent Unit(Conv-GRU) and a Bi-directional Gated Recurrent Unit(Bi-GRU). The proposed method uses Conv-GRU to extract the spatial features of traffic flow and Bi-GRU to extract its periodic features. The extracted features are integrated to obtain the predicted value of traffic flow. Experimental results show that the proposed method can accurately predict short-term traffic flow. Compared with the Conv-LSTM method, this method has a faster convergence speed and shorter running time.
  • YU Jinliang, TU Shanshan, MENG Yuan
    Computer Engineering. 2020, 46(1): 38-44. https://doi.org/10.19678/j.issn.1000-3428.0053943
    In mobile fog computing, the communication between fog nodes and mobile end users is vulnerable to impersonation attacks, which causes security issues in communication and data transmission. On the basis of the physical layer key generation strategy in mobile fog environments, this paper proposes an impersonation attack detection method based on reinforcement learning. An impersonation attack model in fog computing is constructed, and an impersonation attack detection algorithm based on Q-learning is designed under this model to detect impersonation attacks in a dynamic environment. On this basis, the paper analyzes the False Alarm Rate(FAR), Miss Detection Rate(MDR) and Average Error Rate(AER) of this strategy in hypothesis testing to evaluate the performance of the algorithm. Experimental results show that the proposed algorithm can effectively prevent impersonation attacks in a dynamic environment and that its detection performance converges rapidly and reaches a stable state. Besides, the proposed algorithm has higher detection accuracy and a lower average detection error rate.
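As a rough illustration of the Q-learning component, the toy below (a stateless variant with hypothetical parameters and a simulated physical-layer feature, not the paper's algorithm) learns which detection threshold best separates a legitimate sender from an impersonator:

```python
import random

random.seed(0)

THRESHOLDS = [0.2, 0.4, 0.6, 0.8]   # candidate test thresholds (actions)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

def channel_gap(is_attacker):
    # Hypothetical physical-layer feature: the gap between consecutive
    # channel estimates is small for the legitimate sender and large for
    # an impersonator transmitting from a different location.
    base = 0.7 if is_attacker else 0.1
    return base + random.uniform(-0.1, 0.1)

Q = [0.0] * len(THRESHOLDS)          # one Q-value per threshold
for _ in range(5000):
    if random.random() < EPS:
        a = random.randrange(len(THRESHOLDS))   # explore
    else:
        a = Q.index(max(Q))                     # exploit current best
    attacker = random.random() < 0.5
    flagged = channel_gap(attacker) > THRESHOLDS[a]
    reward = 1.0 if flagged == attacker else -1.0
    Q[a] += ALPHA * (reward + GAMMA * max(Q) - Q[a])

best_threshold = THRESHOLDS[Q.index(max(Q))]
```

A threshold that cleanly separates the two feature distributions accumulates the highest Q-value; false alarms and missed detections both show up as negative rewards.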
  • FU Jian, KONG Fang
    Computer Engineering. 2020, 46(1): 45-51. https://doi.org/10.19678/j.issn.1000-3428.0053562
    On the basis of the end-to-end coreference resolution model proposed by LEE et al., this paper further considers the characteristics of Chinese writing and proposes a Chinese coreference resolution model with structural information. The constituency trees of all sentences are compressed to obtain the leaf node depth of the document compression tree. The Structural Embedding of Constituency Tree(SECT) is used to vectorize the structural information. The part of speech, the leaf node depth and the SECT information are introduced into the model as three eigenvectors for Chinese coreference resolution. Test results on the CoNLL2012 dataset show that the application of the three eigenvectors can effectively improve the Chinese coreference resolution of the proposed model, whose average F1 value reaches 62.33%, which is 5.28% higher than the baseline.
  • XIA Yongsheng, WANG Xiaorui, BAI Peng, LI Mengmeng, XIA Yang, ZHANG Kai
    Computer Engineering. 2020, 46(1): 52-59. https://doi.org/10.19678/j.issn.1000-3428.0053659
    Most Point of Interest(POI) recommendation algorithms are susceptible to the influence of time and geographical location, causing incompleteness and ambiguity in the related text information of POIs. Starting from the correlation between time and geographical location, this paper proposes a POI recommendation algorithm using a gated recurrent unit based on time series and distance. A model of time series and related distance information is built on the basis of the gated recurrent unit model. The preference features of the POIs a user visits are extracted, and POI recommendations for the user are made according to these features. Experimental results on real datasets show that, compared with traditional recurrent neural network algorithms, the proposed algorithm can cover long sequences of user POIs, and the recommendation results are more reliable.
  • ZHANG Zhen, LI Ning, TIAN Ying'ai
    Computer Engineering. 2020, 46(1): 60-66,73. https://doi.org/10.19678/j.issn.1000-3428.0053702
    Stream document structure recognition is important to automatic typesetting optimization and information extraction. The existing rule-based structure recognition method performs poorly, and the machine learning-based method has a low recognition accuracy rate because it does not consider the long-distance dependency between document units. To address this problem, this paper proposes a stream document structure recognition method based on a bidirectional Long Short-Term Memory(LSTM) network. The method extracts key features in terms of the format, content and semantics of document units. It then reduces document structure recognition to sequence labeling, and uses a bidirectional LSTM neural network to construct a recognition model that recognizes 18 logical labels. Experimental results show that the method can effectively recognize document structure, and has better recognition performance than the Founder FX software.
  • FU Hanjie, XIONG Yun, ZHU Yangyong
    Computer Engineering. 2020, 46(1): 67-73. https://doi.org/10.19678/j.issn.1000-3428.0053797
    Link prediction is an important application of network analysis. In real scenarios, the network structure evolves with time, so new connections or terminations occur between nodes, resulting in changes in the network structure and deviations within the nodes. To improve link prediction capability, this paper proposes a link prediction algorithm based on node representations with temporal characteristics in dynamic networks. The node representation vector at each moment is obtained by calculating the historical representation vectors, so as to reflect the variation patterns of nodes in the vector space. Meanwhile, by combining the high-order proximity characteristics between nodes, robust node vectors are generated to preserve the network structure. Experimental results on real datasets show that, compared with algorithms such as TNE and DHPE, the proposed algorithm achieves a clear performance improvement on link prediction tasks and can be applied to large-scale dynamic networks.
  • WANG Lijuan, LI Keai, HAO Zhifeng, CAI Ruichu, YIN Ming
    Computer Engineering. 2020, 46(1): 74-79,86. https://doi.org/10.19678/j.issn.1000-3428.0053524
    Existing linear regression methods cannot effectively deal with noise and outliers. To address this problem, this paper establishes the LR RRM model by combining Low Rank Representation(LRR) and robust regression methods. The LRR method is used to detect noise and outliers in the data in a supervised way. The clean part of the data is recovered from the low-dimensional subspace of the original data and used for linear regression classification, so as to improve regression performance. Experimental results on the Extend YaleB, AR, ORL and PIE face datasets show that, compared with the standard linear regression model, the robust principal component analysis based linear regression model and the LRR linear regression model, the proposed model has better classification accuracy and robustness on the four original datasets and the datasets with random noise.
  • GAO Ninghua, WANG Heng, FENG Xinghua
    Computer Engineering. 2020, 46(1): 80-86. https://doi.org/10.19678/j.issn.1000-3428.0053773
    To improve the classification and recognition accuracy of Electrocardiogram(ECG) signals, this paper proposes a classification and recognition method for ECG signals based on time-frequency feature fusion and a Dynamic Fuzzy Decision Tree(DFDT). ECG signals are processed successively by periodic segmentation, wavelet packet decomposition and reconstruction, and pattern recognition. The 2-norm of the coefficient matrix of the wavelet packet transform is taken as the frequency domain feature and fused with time domain features to represent ECG signals. Fuzzy C-Means(FCM) clustering is introduced into the construction of fuzzy decision trees to achieve dynamic partition of the feature space. Experimental results on the MIT-BIH standard ECG database show that the proposed method has a high classification and recognition accuracy rate, reaching 99.14% for normal and abnormal ECG signals.
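The FCM clustering step can be sketched as follows (a generic two-cluster fuzzy C-means on scalar features with assumed parameters, not the paper's DFDT construction; the sample "wavelet-energy" values are illustrative):

```python
def fuzzy_c_means(xs, m=2.0, iters=50):
    """Two-cluster fuzzy C-means on scalar features.

    m > 1 is the fuzziness exponent; each point receives a membership
    degree in both clusters rather than a hard assignment.
    """
    centers = [min(xs), max(xs)]            # deterministic initialization
    c = len(centers)
    for _ in range(iters):
        # membership u[i][k] of point i in cluster k:
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        u = [[1.0 / sum((abs(x - centers[k]) + 1e-12) ** (2 / (m - 1)) /
                        (abs(x - centers[j]) + 1e-12) ** (2 / (m - 1))
                        for j in range(c))
              for k in range(c)] for x in xs]
        # update each center as the membership-weighted mean
        centers = [sum(u[i][k] ** m * xs[i] for i in range(len(xs))) /
                   sum(u[i][k] ** m for i in range(len(xs)))
                   for k in range(c)]
    return sorted(centers)

# two well-separated groups of scalar "features"
centers = fuzzy_c_means([0.9, 1.0, 1.1, 4.9, 5.0, 5.1])
```

The soft memberships, rather than hard cluster labels, are what make the resulting decision-tree partitions "fuzzy".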
  • WU Sifan, DU Yu, XU Shijie, YANG Shuo, DU Chen
    Computer Engineering. 2020, 46(1): 87-92. https://doi.org/10.19678/j.issn.1000-3428.0053381
    Traffic merging models for intelligent vehicles that use a discrete action space to describe speed changes cannot meet the application requirements of actual traffic merging scenarios. Deep Deterministic Policy Gradient(DDPG), which integrates policy gradient with function approximation methods and adopts the same network structure as Deep Q-Network(DQN), uses a continuous action space for problem description, so it is more suitable for describing the changing speed of intelligent vehicles. On this basis, this paper proposes a traffic merging model for intelligent vehicles based on the DDPG algorithm, reducing the traffic merging problem to a sequential decision problem. Experimental results show that, compared with DQN-based models, the proposed model has a faster convergence speed, higher reliability and a higher success rate, which makes it more applicable to traffic merging scenarios of intelligent vehicles.
  • LI Yuanhang, CHEN Xianlai, LIU Li, AN Ying, LI Zhongmin
    Computer Engineering. 2020, 46(1): 93-101. https://doi.org/10.19678/j.issn.1000-3428.0053592
    Privacy protection in data mining is one of the research hotspots in the field of information security. To address the classification problem under privacy protection requirements, this paper proposes RFDPP-Gini, a random forest algorithm with differential privacy protection. Random forests and differential privacy protection are combined to improve classification accuracy while guaranteeing the protection of private information. The CART classification tree is taken as the single decision tree in the random forest, and the Laplace mechanism and the exponential mechanism are used to add noise and select the optimal splitting feature, respectively. Experimental results show that the RFDPP-Gini algorithm can deal with both discrete and continuous features, its classification accuracy on the Adult and Mushroom datasets reaches up to 86.335% and 100% respectively, and classification accuracy declines only slightly after noise is added.
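The two differential privacy primitives the algorithm relies on can be sketched in a few lines (generic mechanisms with illustrative parameters; the Gini gains and budget values below are assumptions, not the paper's exact allocation):

```python
import math
import random

random.seed(42)

def laplace_noise(scale):
    """Sample Laplace(0, scale) by inverse-CDF; adding this to a count
    of sensitivity 1 gives epsilon = 1/scale differential privacy."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def exponential_mechanism(scores, epsilon, sensitivity=1.0):
    """Pick index i with probability proportional to
    exp(epsilon * scores[i] / (2 * sensitivity))."""
    weights = [math.exp(epsilon * s / (2.0 * sensitivity)) for s in scores]
    r = random.uniform(0.0, sum(weights))
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

noisy_count = 1000 + laplace_noise(scale=1.0 / 0.5)    # epsilon = 0.5
gini_gains = [0.05, 0.30, 0.10]    # hypothetical per-feature Gini gains
split_feature = exponential_mechanism(gini_gains, epsilon=2.0)
```

The Laplace mechanism perturbs the leaf counts, while the exponential mechanism makes the choice of splitting feature itself noisy, so that no single record's presence decides the tree structure.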
  • ZHU Wenfeng, WANG Qin, GUO Zheng, LIU Junrong
    Computer Engineering. 2020, 46(1): 102-107,113. https://doi.org/10.19678/j.issn.1000-3428.0053229
    To improve the effect of Side Channel Attacks(SCAs) on hardware implementations of block cipher algorithms, and to increase the discrimination between correct keys and wrong keys, this paper proposes a SCAs method for block ciphers. This method combines the characteristics of the Differential Power Analysis(DPA) attack and the zero-value attack, and utilizes as many power components as possible through classification, thus recovering all keys through the attack. The AES hardware circuit is implemented on an FPGA and experiments are carried out. The results show that the proposed method successfully recovers all keys from 200,000 fully random plaintext power traces. Besides, the correct keys and wrong keys are more distinguishable with this method than with the DPA attack method.
  • SHI Zhicai, WANG Yihan, ZHANG Xiaomei, CHEN Jiwei, CHEN Shanshan
    Computer Engineering. 2020, 46(1): 108-113. https://doi.org/10.19678/j.issn.1000-3428.0052324
    The Radio Frequency Identification(RFID) grouping-proof protocol is used to verify whether multiple tags are present, because in actual scenarios these tags are used together for the identification of one object. However, due to the simple structure and limited computing and storage resources of RFID tags, a secure protocol is difficult to achieve. To address this problem, this paper proposes a grouping-proof protocol that ensures privacy and forward security. First, the protocol uses a hash function and randomization to guarantee the confidentiality and privacy of each session. Then, by means of an activate-sleep mechanism, a filter-response mechanism and the combination of identity authentication with grouping-proof technology, the efficiency of the protocol is improved. Analysis results show that the RFID grouping-proof protocol meets the requirements of anonymity and forward security, so it can prevent eavesdropping, tracking attacks, replay attacks and desynchronization attacks.
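The hash-and-randomization idea can be illustrated as a chained challenge-response (a simplified sketch using SHA-256; the tag IDs, keys and message flow are hypothetical, not the paper's exact protocol):

```python
import hashlib
import os

def h(*parts):
    digest = hashlib.sha256()
    for p in parts:
        digest.update(p)
    return digest.digest()

class Tag:
    def __init__(self, tid, key):
        self.tid, self.key = tid, key

    def respond(self, challenge, prev):
        # chaining the previous response binds this tag's reply to the
        # whole group; the fresh challenge keeps sessions unlinkable
        return h(self.key, challenge, prev, self.tid)

def grouping_proof(tags, keys):
    challenge = os.urandom(16)        # verifier's per-session nonce
    proof = challenge
    for tag in tags:
        proof = tag.respond(challenge, proof)
    # the verifier, knowing the shared keys, recomputes the chain
    check = challenge
    for tag, key in zip(tags, keys):
        check = h(key, challenge, check, tag.tid)
    return proof == check

keys = [os.urandom(16), os.urandom(16)]
tags = [Tag(b"tag-A", keys[0]), Tag(b"tag-B", keys[1])]
ok = grouping_proof(tags, keys)       # True: both tags answered the nonce
```

Because every session uses a fresh nonce, replaying an old proof fails, and an eavesdropper cannot link two sessions of the same tag.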
  • ZHAO Nan, ZHANG Guoan, GU Xiaohui
    Computer Engineering. 2020, 46(1): 114-120,128. https://doi.org/10.19678/j.issn.1000-3428.0053606
    In Vehicular Ad-hoc Network(VANET), problems such as vulnerable communication data and low computational efficiency are ubiquitous. This paper proposes a certificateless aggregate signature scheme based on the certificateless public key cryptosystem and aggregate signatures. The proposed scheme can resist two different types of adversary attacks under the random oracle model, and the unforgeability of communication messages under the adaptive chosen message attack is proved. Moreover, vehicle nodes communicate through pseudo-identities generated by trusted authorities, thus achieving the traceability and anonymity of user communication. On the basis of bilinear pairing operations, the proposed scheme supports aggregate verification of multiple messages through aggregate signatures. Simulation results show that, compared with other certificateless aggregate signature schemes, the proposed scheme has higher communication efficiency on road sections with large traffic flow, and it can effectively achieve privacy protection of vehicle users during VANET communication on urban roads.
  • DING Wei, ZHANG Qianfeng, ZHOU Wenfeng
    Computer Engineering. 2020, 46(1): 121-128. https://doi.org/10.19678/j.issn.1000-3428.0053065
    Based on an Intrusion Detection System(IDS), this paper proposes a response scheme against User Datagram Protocol(UDP) reflection attacks launched through five kinds of UDP reflection amplifiers: Character Generator Protocol(CharGen), Domain Name System(DNS), Network Time Protocol(NTP), Simple Network Management Protocol(SNMP) and Simple Service Discovery Protocol(SSDP). After the reflection attack amplifier is located, the scheme combines Software Defined Network(SDN) on the network boundary with response rules based on OpenFlow tables to filter control command messages, so that UDP reflection attacks can be prevented. Test results on the network boundary of the Nanjing main node of the China Education and Research Computer Network(CERNET) demonstrate the operability and effectiveness of the proposed response scheme.
  • YE Qing, WANG Mingming, TANG Yongli, QIN Panke, WANG Yongjun
    Computer Engineering. 2020, 46(1): 129-135,143. https://doi.org/10.19678/j.issn.1000-3428.0053213
    To address the computational complexity of trapdoor generation in Hierarchical Identity-Based Encryption(HIBE) schemes on lattices under the standard model, this paper proposes a HIBE scheme based on a programmable hash function. First, the trapdoor is generated by the MP12 trapdoor function. Then, the master public key, the master private key and the ciphertext are obtained through the programmable hash function. Experimental results show that, compared with the fixed-dimension HIBE scheme under the standard model, the computational complexity of trapdoor generation in this scheme is significantly diminished, the length of the master public key is reduced to O(log_b n), and the scheme satisfies INDr-aID-CPA security.
  • YANG Peian, LIU Baoxu, DU Xiangyu
    Computer Engineering. 2020, 46(1): 136-143. https://doi.org/10.19678/j.issn.1000-3428.0051157
    New network attacks are becoming more covert and persistent while proliferating rapidly, resulting in a sudden increase in the difficulty of attack recognition and detection. To improve the efficiency and accuracy of network attack recognition, this paper proposes a portrait analysis method of threat intelligence for attack recognition. Based on the Kill Chain model and the principles of the attack process, the method builds data representation standards for the attack graph, so as to build a mining model of transition relationships between threat attribute states, from which the attribute state transition sequence is extracted. On this basis, the method takes advantage of the Colored Petri Net(CPN) attack graph in causality processing and expression to associate threat attributes, and converts related elements and attributes into an Element Atomic Graph(EAG). The EAG is fused using the element fusion algorithm to implement portrait analysis of threat intelligence. Application results in actual attack analysis demonstrate that the proposed method can improve the accuracy of network attack recognition and shorten the response period of attack recognition.
  • GUO Yue, WANG Hongjun, XIE Mengqi
    Computer Engineering. 2020, 46(1): 144-149. https://doi.org/10.19678/j.issn.1000-3428.0053622
    Location-based services are among the most promising applications of the Internet of Things(IoT), and providing reliable localization information has become an important indicator of the technology standard of IoT. To effectively locate unknown nodes and to address the low localization accuracy and slow convergence speed of the Fruit Fly Optimization Algorithm(FOA), this paper proposes an improved localization method for IoT nodes based on the fruit fly algorithm. The method adopts the bounding-box algorithm to set the initial range of FOA localization and rebuilds the smell concentration function. Then an appropriate number of measurement nodes and population scale are selected to balance dynamic characteristics and localization accuracy. Experimental results show that, compared with algorithms such as FOA and Particle Swarm Optimization(PSO), the proposed algorithm has better localization accuracy and convergence speed, and its improved stability can meet localization requirements.
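The bounding-box initialization plus FOA-style search can be sketched as follows (a generic illustration with assumed anchor positions and parameters; the paper's rebuilt smell concentration function is approximated here by the inverse of the squared ranging error):

```python
import math
import random

random.seed(7)

def foa_locate(anchors, dists, pop=30, iters=200, step=0.5):
    """FOA-style random search seeded by the bounding-box estimate."""
    # bounding-box algorithm: intersect the squares implied by each range
    x_lo = max(a[0] - d for a, d in zip(anchors, dists))
    x_hi = min(a[0] + d for a, d in zip(anchors, dists))
    y_lo = max(a[1] - d for a, d in zip(anchors, dists))
    y_hi = min(a[1] + d for a, d in zip(anchors, dists))
    bx, by = (x_lo + x_hi) / 2.0, (y_lo + y_hi) / 2.0

    def error(x, y):
        # squared ranging error; "smell concentration" is its inverse
        return sum((math.hypot(x - a[0], y - a[1]) - d) ** 2
                   for a, d in zip(anchors, dists))

    for _ in range(iters):
        flies = [(bx + random.uniform(-step, step),
                  by + random.uniform(-step, step)) for _ in range(pop)]
        cand = min(flies, key=lambda f: error(*f))
        if error(*cand) < error(bx, by):
            bx, by = cand            # move the swarm to the best smell
    return bx, by

anchors = [(0, 0), (10, 0), (0, 10), (10, 10)]
true_pos = (3.0, 4.0)
dists = [math.hypot(true_pos[0] - a[0], true_pos[1] - a[1]) for a in anchors]
est = foa_locate(anchors, dists)
```

Seeding the swarm inside the bounding box, instead of the whole field, is what shrinks the initial search range and speeds up convergence.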
  • XU Changbiao, GUO Ruibo, XIAN Yongju
    Computer Engineering. 2020, 46(1): 150-156,163. https://doi.org/10.19678/j.issn.1000-3428.0053787
    To address network interference caused by the different duplex modes of small base stations in heterogeneous dense cellular networks, this paper proposes a deployment scheme for In-Band Full Duplex(IBFD) base stations. First, an optimization model for IBFD base station deployment is built, and an approximate solution is obtained by a hybrid in-band/out-band full duplex mode selection algorithm based on a greedy algorithm. Then an appropriate proportion of in-band backhaul base stations is deployed to maximize the system capacity. Simulation results show that the proposed scheme can effectively coordinate the cross-layer and inter-cell interference caused by small base stations to macro base stations, improving the spectrum efficiency of the system.
  • LIN Chao, ZHENG Lin, ZHANG Wenhui, DENG Xiaofang
    Computer Engineering. 2020, 46(1): 157-163. https://doi.org/10.19678/j.issn.1000-3428.0054163
    Existing abnormal node detection methods for Wireless Sensor Network(WSN) have difficulty in obtaining statistical models, and their computational cost on large-dimensional data is high. To address this problem, this paper introduces Random Matrix Theory(RMT) and designs a new abnormal node location algorithm for WSN. The algorithm uses the spatial and temporal features of raw data to construct a big data matrix and reduces its dimensions using random matrices. On this basis, the algorithm takes the average spectral radius as an evaluation index to judge globally whether an exception has occurred in the network. The abnormal node is then precisely located by using the spectral distribution theorem in RMT and the properties of the singular value decomposition of the covariance matrix. Simulation results show that the proposed algorithm has a higher accuracy rate in outlier detection and node location than the Distributed Fault Detection(DFD) algorithm.
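A simplified version of the random-matrix test can be sketched as follows. Instead of the paper's average spectral radius, this toy compares the largest covariance eigenvalue (estimated by power iteration) against the Marchenko-Pastur bulk edge, a related RMT threshold; the sensor counts, sample lengths and injected anomaly are all illustrative:

```python
import math
import random

random.seed(3)

def top_cov_eigenvalue(X, iters=200):
    """Largest eigenvalue of the sample covariance of X (rows = sensors,
    columns = time samples), estimated by power iteration after
    standardizing each row."""
    n, T = len(X), len(X[0])
    Z = []
    for row in X:
        mu = sum(row) / T
        sd = math.sqrt(sum((v - mu) ** 2 for v in row) / T) or 1.0
        Z.append([(v - mu) / sd for v in row])
    v = [random.random() for _ in range(n)]
    lam = 1.0
    for _ in range(iters):
        # w = (1/T) * Z Z^T v, i.e. the covariance matrix applied to v
        zv = [sum(Z[i][t] * v[i] for i in range(n)) for t in range(T)]
        w = [sum(Z[i][t] * zv[t] for t in range(T)) / T for i in range(n)]
        lam = math.sqrt(sum(x * x for x in w))
        v = [x / lam for x in w]
    return lam

n, T = 20, 200
mp_edge = (1 + math.sqrt(n / T)) ** 2     # Marchenko-Pastur upper edge
normal = [[random.gauss(0, 1) for _ in range(T)] for _ in range(n)]
# inject a common anomaly signal into the first 5 sensors
spike = [random.gauss(0, 1) for _ in range(T)]
abnormal = [[x + 2 * s for x, s in zip(row, spike)] if i < 5 else list(row)
            for i, row in enumerate(normal)]
```

For pure noise the top eigenvalue stays inside the RMT-predicted bulk; a shared anomaly across several sensors pushes an eigenvalue far outside it, which is the kind of model-free detection signal the algorithm exploits.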
  • WANG Dingding, DING Xu, ZHAO Chong, SHI Lei, HAN Jianghong
    Computer Engineering. 2020, 46(1): 164-171,178. https://doi.org/10.19678/j.issn.1000-3428.0052809
    Devices for energy replenishment are introduced into Wireless Sensor Network(WSN) to extend its life cycle. To increase the vacation time of these devices, this paper designs a base station deployment strategy for WSN. First, clustering is performed on sensor nodes, and the total traffic of sensor nodes in each area is calculated to determine the coordinates of base stations. Then the strategy constructs a cross-layer optimization problem aiming at maximizing the vacation time ratio of energy replenishing devices, and converts it into an equivalent linear programming problem to obtain the optimal configuration of sensor nodes and wireless energy replenishing devices. Simulation results show that the proposed strategy can provide a 75% increase in the vacation time ratio of wireless energy replenishing devices compared with a fixed base station deployment strategy.
  • CHEN Qiuyao, ZHENG Quan
    Computer Engineering. 2020, 46(1): 172-178. https://doi.org/10.19678/j.issn.1000-3428.0053854
    Named Data Network(NDN) caching strategies often pay little attention to the service type to which content belongs, or to the Quality of Service(QoS) requirements of different service types, so it is difficult to apply these strategies to real-world scenarios with various service types and complex user requirements. To make better use of limited caching resources, this paper refers to the Diffserv model in IP networks and proposes a caching content classification model that can be used in NDN. It also presents DiffCache, a probability caching algorithm that jointly considers content classification, local router popularity and content download latency. Experimental results show that the proposed algorithm can realize dynamic allocation of caching resources and clearly differentiate the performance indicators of each content type without affecting the global hit rate and download latency.
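The class-aware caching decision can be sketched as a scoring rule. The class names follow Diffserv's EF/AF/BE convention mentioned above, but the weights and the multiplicative score are assumptions for illustration, not DiffCache's actual formula:

```python
import random

random.seed(5)

# hypothetical priority weights per Diffserv-style service class
CLASS_WEIGHT = {"EF": 1.0, "AF": 0.6, "BE": 0.3}

def cache_probability(svc_class, popularity, latency_ms, max_latency_ms=500.0):
    """Cache high-priority, locally popular, slow-to-fetch content with
    higher probability; popularity is assumed normalized to [0, 1]."""
    p = (CLASS_WEIGHT[svc_class] * popularity
         * min(latency_ms / max_latency_ms, 1.0))
    return min(p, 1.0)

def maybe_cache(store, name, svc_class, popularity, latency_ms):
    # probabilistic insertion: flip a biased coin per arriving content
    if random.random() < cache_probability(svc_class, popularity, latency_ms):
        store.add(name)

store = set()
maybe_cache(store, "/video/clip1", "EF", popularity=0.9, latency_ms=400)
```

Weighting by class is what dynamically shifts cache space toward latency-sensitive traffic when router storage is contended.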
  • TIAN Jiyao, LIU Guangzhong
    Computer Engineering. 2020, 46(1): 179-186. https://doi.org/10.19678/j.issn.1000-3428.0053514
    In Wireless Sensor Network(WSN), network nodes have only limited power energy, which greatly affects their service life. Therefore, this paper proposes an energy-optimized clustering routing algorithm based on multiple factors. First, the optimal cluster head is selected based on a fuzzy rule algorithm and the combination of the relative residual energy, relative centrality and relative density of nodes. Then, the Theil index is introduced to improve the probability function of the ant colony algorithm. On this basis, a linear programming model is established with a comprehensive consideration of node energy consumption and communication link quality. Simulation results show that, compared with the CFEL and LEACH algorithms, the proposed algorithm can extend network lifetime, reduce energy consumption and improve load balancing.
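The Theil index used to bias the ant colony's probability function can be computed directly (this is the standard formulation applied to residual energies; how the index enters the paper's probability function is not reproduced here):

```python
import math

def theil_index(energies):
    """Theil index of residual-energy imbalance: 0 when all nodes hold
    equal energy, growing as energy concentrates on a few nodes."""
    n = len(energies)
    mean = sum(energies) / n
    return sum((e / mean) * math.log(e / mean)
               for e in energies if e > 0) / n

balanced = theil_index([5.0, 5.0, 5.0, 5.0])      # 0.0
skewed = theil_index([9.0, 1.0, 1.0, 1.0])        # clearly positive
```

Routing choices that keep the index low spread energy consumption evenly, which is exactly the load-balancing effect reported in the simulation results.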
  • SUN Zhenyu, SHI Jingyan, SUN Gongxing, DU Ran, JIANG Xiaowei, ZOU Jiaheng, TAN Hongnan
    Computer Engineering. 2020, 46(1): 187-195. https://doi.org/10.19678/j.issn.1000-3428.0053582
    The HTCondor and SLURM computing clusters in high-energy physics computing platforms provide data processing services for many high-energy physics experiments. However, HTCondor is not efficient in parallel job scheduling, SLURM cannot manage massive numbers of serial jobs well, and the overall resource management and scheduling strategies of computing platforms are too simple. To meet the demands of heavily loaded high-energy physics computing clusters, this paper designs a dual-layer job scheduling system, which adds a job management layer on top of the existing job schedulers. The system is designed to efficiently schedule serial and parallel jobs, ensure fair use of resources between experiment groups, and enable users to implement fine-grained management of jobs. Test results show that the dual-layer job scheduling system supports rapid submission of massive numbers of high-energy physics jobs, makes full use of the resources of a computing platform, and has high job scheduling performance.
  • YIN Kangqi, WU Ming, WANG Pengcheng, XU Yun
    Computer Engineering. 2020, 46(1): 196-200,207. https://doi.org/10.19678/j.issn.1000-3428.0053683
    Code completion suggestions can significantly enhance coding efficiency in programming, but existing tools and methods are not effective enough at collecting suitable candidate code from code fragments of widely varying sizes. To address this problem, this paper proposes a new code block completion suggestion method based on gapped code clone techniques. By improving a matching algorithm based on sliding windows and error-matching indexes, the method searches for candidate code blocks similar to the block to be completed. It then performs feature extraction, clustering and similarity-based sorting on the candidate code blocks to obtain the order of suggested code blocks. Experimental results show that, compared with the code block completion suggestion method proposed by HILL et al., the proposed method has higher accuracy and is applicable to more code block completion scenarios.
  • SONG Kuangshi, LI Chong, ZHANG Shibo
    Computer Engineering. 2020, 46(1): 201-207. https://doi.org/10.19678/j.issn.1000-3428.0054014
    To improve customization and reduce coupling and resource consumption in large-scale machine learning systems, this paper designs and implements a lightweight distributed machine learning system with high performance and scalability. The system adopts a modular and layered design, and a variety of mainstream machine learning and deep learning algorithms are migrated to it. Two kinds of efficient, extensible gradient synchronization schemes, parameter server and Ring All-Reduce, are proposed, and parallel training acceleration experiments are performed on mainstream algorithm models. Experimental results show that the system achieves excellent scalability and stability for both sparse and dense models. Parameter server training achieves accuracy and convergence performance similar to those of standalone training, and the proposed Ring All-Reduce achieves a 6x training acceleration on 8 nodes compared with a single node.
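The Ring All-Reduce scheme can be simulated in a few lines. This is the generic textbook formulation (N-1 scatter-reduce steps followed by N-1 all-gather steps, each worker exchanging one vector chunk per step), not the paper's networked implementation:

```python
def ring_all_reduce(grads):
    """Each of n workers ends with the element-wise sum of all gradient
    vectors, communicating only one chunk per neighbour per step."""
    n, dim = len(grads), len(grads[0])
    bounds = [(i * dim // n, (i + 1) * dim // n) for i in range(n)]
    buf = [list(g) for g in grads]

    # scatter-reduce: after n-1 steps, worker i holds the complete
    # sum for chunk (i + 1) % n
    for step in range(n - 1):
        sends = []
        for i in range(n):
            lo, hi = bounds[(i - step) % n]
            sends.append(((i + 1) % n, lo, buf[i][lo:hi]))
        for dst, lo, data in sends:    # transfers within a step are "simultaneous"
            for k, v in enumerate(data):
                buf[dst][lo + k] += v

    # all-gather: circulate the completed chunks around the ring
    for step in range(n - 1):
        sends = []
        for i in range(n):
            lo, hi = bounds[(i + 1 - step) % n]
            sends.append(((i + 1) % n, lo, buf[i][lo:hi]))
        for dst, lo, data in sends:
            for k, v in enumerate(data):
                buf[dst][lo + k] = v
    return buf

grads = [[1, 2, 3, 4, 5, 6],
         [10, 20, 30, 40, 50, 60],
         [100, 200, 300, 400, 500, 600]]
reduced = ring_all_reduce(grads)   # every row becomes [111, 222, ..., 666]
```

Because each worker sends only 2*(n-1)/n of its gradient vector in total, the per-node bandwidth is nearly constant as nodes are added, which is why the scheme scales well to 8 nodes.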
  • MEI Hao, DAI Hongbing, LIU Jing
    Computer Engineering. 2020, 46(1): 208-215,221. https://doi.org/10.19678/j.issn.1000-3428.0053552
    In existing embedded Forth operating systems,multitask space cannot be reused and multitask management supports only task creation.To address the problem,this paper proposes a space reuse algorithm for multitask in embedded operating systems based on the Forth Virtual Machine(FVM).The algorithm takes the task control block as the header node of the idle task image partition list.The link address variable in the task control block is used to track the background task images deleted by the system,and both collection and redistribution of task image space can be realized by modifying only one user variable pointer.Experimental results show that the proposed algorithm improves the memory resource utilization of the Forth system while ensuring its stability and inherent features.It is applicable to embedded environments with limited resources.
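    The core idea, recycling deleted task image space through a free list whose head is a single pointer, can be sketched as follows; the class and method names are illustrative stand-ins for the Forth task control block and user variables, not the paper's actual FVM code:

```python
class TaskBlock:
    """Toy stand-in for a task control block: the 'link' field plays the
    role of the link address variable that threads deleted task images
    into the idle task image partition list."""
    def __init__(self, name):
        self.name = name
        self.link = None

class TaskPool:
    """Reuse deleted task image space via a free list tracked by one
    pointer - mirroring the idea of realizing both collection and
    redistribution by modifying only one user variable pointer."""
    def __init__(self):
        self.free_head = None   # the single pointer ever modified

    def delete(self, block):
        # Push the deleted block onto the idle list (O(1)).
        block.link = self.free_head
        self.free_head = block

    def create(self, name):
        # Pop a recycled block if one exists, otherwise allocate fresh.
        if self.free_head is not None:
            block = self.free_head
            self.free_head = block.link
            block.name = name
            block.link = None
            return block
        return TaskBlock(name)
```

    Because deletion and creation each touch only the list head, the scheme adds almost no overhead, which suits resource-limited embedded environments.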
  • ZHANG Wanying, CAO Xiaomei, CHEN Wei
    Computer Engineering. 2020, 46(1): 216-221. https://doi.org/10.19678/j.issn.1000-3428.0053800
    To address the environment interaction problem in white-box fuzz testing,this paper proposes a hidden path search scheme,HPSBEF,based on external function detection and correction.In the proposed scheme,constraint solving is used to obtain the output value of the external function when executing a new path,and the results are recorded in a linked list.The external function in the executed path is detected and dynamically modified according to the information in the linked list,so as to drive path exploration and further improve the path coverage rate.Experimental results show that compared with the FMM scheme,the coverage rate and vulnerability detection ability of the HPSBEF scheme are improved,and its time cost is lower.
  • XU Fu, HAO Liang, CHEN Feixiang, LI Dongmei, CUI Xiaohui
    Computer Engineering. 2020, 46(1): 222-228,242. https://doi.org/10.19678/j.issn.1000-3428.0053886
    As an important software development mode,open source code reuse suffers from two major problems:open source license infringement and synchronous update of code.Therefore,this paper designs an efficient incremental analysis method for code warehouses by utilizing the high similarities between code snapshots.On this basis,the Simhash algorithm is used to map the function code into a function fingerprint.Then,an engineering similarity calculation method that takes the function as the basic analysis unit is proposed,so as to reduce the storage space of analysis results and improve the speed of code comparison.Three groups of experiments are designed to evaluate the effectiveness of the proposed method from the aspects of code analysis efficiency,engineering similarity determination and function update detection respectively.Results show that the proposed method can meet the needs of similarity detection and code traceability in open source code reuse,while effectively reducing the overall analysis time.
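    The Simhash fingerprinting step can be sketched in a few lines; the MD5-based token hashing and the 64-bit width below are illustrative choices, not necessarily those used by the method:

```python
import hashlib

def simhash(tokens, bits=64):
    """Minimal Simhash: map a token sequence (e.g. a tokenized function
    body) to a fixed-width fingerprint.  Near-identical functions yield
    fingerprints with a small Hamming distance, so function-level
    similarity reduces to cheap bit comparisons."""
    v = [0] * bits
    for tok in tokens:
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    fp = 0
    for i in range(bits):
        if v[i] > 0:
            fp |= 1 << i
    return fp

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")
```

    Storing one fingerprint per function instead of full analysis results is what shrinks the storage space and speeds up snapshot-to-snapshot comparison.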
  • LIU Yanzhi, CHEN Lifu, CUI Xianliang, YUAN Zhihui, XING Xuemin
    Computer Engineering. 2020, 46(1): 229-235. https://doi.org/10.19678/j.issn.1000-3428.0053726
    In order to make full use of the scene information of remote sensing images and improve the accuracy of scene classification,this paper proposes a scene classification method based on a spatial feature recalibration network.The Multi-scale Omnidirectional Gaussian Derivative Filter(MOGDF) is constructed to obtain multi-scale spatial features of remote sensing images.Then,a feature recalibration network is constructed by introducing separable convolution and additional momentum methods,and the bottleneck structure is formed by using the fully connected layer to learn the correlation between the feature channels.The multi-scale spatial features are weighted to achieve the recalibration of features.Finally,combined with Convolutional Neural Network(CNN) training,the classification results are obtained.Experimental results on the UCM_LandUse and airborne SAR image datasets show that the accuracy rates of the proposed method for remote sensing image classification reach 94.76% and 95.38%,respectively.Compared with algorithms such as MCNN,MS-DCNN and PCA-CNN,the accuracy and generalization ability of remote sensing image classification are significantly improved.
  • QI Xiangming, WANG Jiaqi
    Computer Engineering. 2020, 46(1): 236-242. https://doi.org/10.19678/j.issn.1000-3428.0053499
    The stitching of large parallax images can cause double images in the overlapping area and perspective distortion in the non-overlapping area.To address these problems,this paper proposes an improved large parallax image stitching algorithm.The low density mesh deformation is established by the As-Projective-As-Possible(APAP) algorithm,and the mesh deformation in the overlapping area is subdivided according to the distribution of paired matching points of the images to be stitched.The globally optimal similarity matrix is calculated by the Random Sample Consensus(RANSAC) algorithm,and the perspective distortion in the non-overlapping area is corrected.Then the globally optimal similarity matrix and the mesh homography matrix are weighted and superposed,so as to realize the deformation of the target image.On this basis,content awareness is applied in the overlapping area of the target image,in which the area of less importance is retained and spliced,thus avoiding the double image problem in the overlapping area.Experimental results show that compared with other algorithms,such as the APAP algorithm and the SPHP algorithm,the proposed algorithm can better restore the real scenario,and the root mean square error of the stitched image is lower.
  • WANG Rongfeng, HU Min
    Computer Engineering. 2020, 46(1): 243-246,254. https://doi.org/10.19678/j.issn.1000-3428.0052763
    To address the heavy and inefficient computations caused by traditional grid point-based regional coverage analysis of satellites,this paper proposes an improved algorithm for regional coverage analysis of satellites.After the coverage band polygon of the satellite is generated and the bounding box of the target region is divided into grids,scanlines are constructed with grid points on the lines of longitude.The proposed method takes the intersection of the scanlines and the target region as the initial computational object,and calculates the intersection of the initial computational object and the coverage band polygon to segment the scanlines.The data of the segmented scanlines are analyzed to obtain indicators including the coverage rate and the number of repeated coverages.Analysis results of the example show that the proposed algorithm reduces time complexity and space complexity,requiring only 1.19% of the computing time of the traditional grid point method when the number of grids exceeds 800 000.
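    Per scanline, the segmentation step reduces to intersecting 1-D intervals; the sketch below is a deliberately simplified illustration (coverage intervals assumed non-overlapping, spherical geometry ignored), not the paper's actual algorithm:

```python
def intersect(seg, cover):
    """Clip one scanline segment against a list of coverage intervals,
    returning the covered sub-segments."""
    lo, hi = seg
    out = []
    for a, b in cover:
        s, e = max(lo, a), min(hi, b)
        if s < e:
            out.append((s, e))
    return out

def coverage_rate(scanlines, cover):
    """Coverage rate = covered scanline length / total scanline length.
    'scanlines' stands in for the intersections of the scanlines with the
    target region; 'cover' for the coverage band polygon, both reduced to
    1-D intervals for this illustration."""
    total = sum(hi - lo for lo, hi in scanlines)
    covered = sum(e - s
                  for seg in scanlines
                  for s, e in intersect(seg, cover))
    return covered / total if total else 0.0
```

    Working with segments instead of testing every grid point individually is what removes the bulk of the per-point computation.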
  • ZHAO Ya'nan, WU Liming, CHEN Qi
    Computer Engineering. 2020, 46(1): 247-254. https://doi.org/10.19678/j.issn.1000-3428.0053233
    One-stage detection algorithms cannot balance precision and real-time performance in small object detection.To address the problem,this paper proposes a small object detection algorithm based on the multi-scale fusion Single Shot multi-box Detector(SSD).The algorithm designs a fusion module based on the network structures of the SSD and Deconvolutional Single Shot Detector(DSSD) algorithms to implement the functions of the Top-Down structure,and thus enables skip connections between the high-level network and the low-level network.Then SSD-VGG16 is used to extend the convolution feature maps to extract multi-scale features,and multivariate data of different convolutional layers,scales and features is classified for prediction and position regression.Experimental results on a fabric defect database show that the detection precision of the proposed algorithm reaches 78.2% and its detection speed reaches 51 frames/s,which outperforms SSD,DSSD and other algorithms.The results prove that the proposed algorithm can improve the detection speed while ensuring high precision.
  • KANG Jie, DING Jumin, WAN Yong, LEI Tao
    Computer Engineering. 2020, 46(1): 255-261,270. https://doi.org/10.19678/j.issn.1000-3428.0055495
    When a Convolutional Neural Network(CNN) is used for segmentation of liver images with blurred boundaries,the segmentation precision is reduced due to the frequent loss of location information.To address the problem,this paper proposes an automated liver image segmentation algorithm that combines watershed correction and the U-Net model.The algorithm takes advantage of U-Net in layered learning of image features,so as to achieve fusion of shallow features and deep features without loss of detailed information,such as the location of the target.After the initial result of liver image segmentation is obtained,the boundaries of the initial result are corrected by using blocks formed by the watershed algorithm,so as to obtain a segmentation result with smooth and precise boundaries.Experimental results show that the proposed algorithm can implement more precise liver image segmentation compared with the existing graph-cut algorithm and the Fully Convolutional Network(FCN) algorithm.
  • LIU Chang, ZHANG Jian, LIN Jianping
    Computer Engineering. 2020, 46(1): 262-270. https://doi.org/10.19678/j.issn.1000-3428.0053574
    To accurately extract high precision edges from complex background images,this paper proposes an improved single-pixel edge extraction algorithm.In the improved fully convolutional neural network,the algorithm adds an auxiliary output layer and adopts a multi-scale input method to coarsely extract the multi-pixel edges of an image.Then the watershed algorithm is used to refine and relocate the multi-pixel edges to obtain high precision single-pixel edges of the image.Application results on magnetic tile images show that the algorithm has strong robustness and can extract complete,continuous,high precision single-pixel edges.
  • ZHANG Chiming, WANG Qingfeng, LIU Zhiqing, HUANG Jun, ZHOU Ying, LIU Qiyu, XU Weiyun
    Computer Engineering. 2020, 46(1): 271-278. https://doi.org/10.19678/j.issn.1000-3428.0053340
    In the early screening process of lung cancer,the manual diagnosis of chest CT scan images is time-consuming and laborious.Deep learning networks can be an effective solution,but sufficient medical data for training is often lacking.To address this problem,this paper proposes a Progressive Fine-Tuning(PFT) strategy,and applies this strategy to a deep transfer learning network for the auxiliary diagnosis of benign and malignant pulmonary nodules.First,the neural network is used to learn feature knowledge from a large dataset of coarse-grained natural images.Then,the learnt feature information is transferred to the small dataset of fine-grained pulmonary nodules through the reconstructed network classification layer.From the fully connected classification layer to the convolutional layers,the PFT strategy is adopted to release and fine-tune the layers one by one.Finally,the optimal fine-tuning depth is determined according to the quantitative analysis of the AUC values of each layer after fine-tuning.Besides,Gradient-weighted Class Activation Mapping(Grad-CAM) and the t-SNE algorithm are used to provide corresponding visual support and interpretation for network prediction results.Experimental results on the LIDC dataset show that the diagnosis accuracy of benign and malignant pulmonary nodules of the proposed method can reach 91.44%,and its AUC value is 0.962 1.
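    The layer-by-layer release loop of the PFT strategy can be sketched abstractly; the `evaluate` callback below is a hypothetical stand-in for "fine-tune with the first k layers frozen and return the validation AUC",and the search shown is an illustration rather than the paper's exact procedure:

```python
def progressive_fine_tune(layers, evaluate):
    """Sketch of the progressive fine-tuning (PFT) search: starting from
    the classifier head (everything frozen), release one more layer at a
    time from the back of the network, re-evaluate, and keep the freezing
    depth whose validation AUC is highest."""
    best_auc, best_depth = -1.0, len(layers)
    # frozen = number of leading layers kept frozen; decreasing it by one
    # corresponds to releasing and fine-tuning one more layer.
    for frozen in range(len(layers), -1, -1):
        auc = evaluate(frozen)
        if auc > best_auc:
            best_auc, best_depth = auc, frozen
    return best_depth, best_auc
```

    In practice each `evaluate` call is a full fine-tuning run, so the quantitative AUC comparison across depths is the expensive part of the strategy.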
  • LIU Yao, SONG Yuanbin, LI Yunxiang
    Computer Engineering. 2020, 46(1): 279-285. https://doi.org/10.19678/j.issn.1000-3428.0054247
    In order to solve the problems of model expression and calculation in complex construction projects,this paper studies the application of three logical relations,namely mutual exclusion,coexistence and dependence,in the scheduling project expression model.On this basis,a mixed integer linear programming model for the complex construction scheduling problem is proposed and an improved genetic algorithm is designed to quickly solve the model.Based on the sequential coding manner of Boolean variable division,the chromosome is divided into independent variable and semi-independent variable coding gene segments,and the reciprocal of the shortest duration is used as the fitness function to search for the optimal solution in a heuristic approach.Then,conflict detection is performed after genetic operations,so as to eliminate individuals that violate constraint rules generated by population initialization,crossover and mutation operations,thus ensuring the effectiveness of the algorithm.The calculation results of project cases show that compared with the traditional exact algorithm,the proposed algorithm can effectively shorten the time required to solve for the construction period of large construction projects.
  • HAO Zhanjun, DUAN Yu, DANG Xiaochao, CAO Yuan
    Computer Engineering. 2020, 46(1): 286-293. https://doi.org/10.19678/j.issn.1000-3428.0053821
    Most of the existing human motion recognition methods have drawbacks such as low recognition accuracy,high cost and limited identification ability,in that only simple motions can be identified.Therefore,this paper proposes a human complex motion recognition method based on Channel State Information(CSI),which is verified on the actions of the traditional martial art Xing Yi Quan.First,a Wi-Fi network adapter is used to collect the CSI data of Xing Yi Quan.Then,the amplitude of the collected data is used as the characteristic value,and the high frequency and low frequency abnormal values are respectively filtered by the Butterworth low pass filter and the Discrete Wavelet Transform(DWT).In the offline phase,the Restricted Boltzmann Machine(RBM) is adopted for the training and classification of the pre-processed data,thus building the fingerprint database of Xing Yi Quan.In the online phase,the Deep Belief Network(DBN) is applied for the classification of collected data,and the classification results are matched with the fingerprint database,so that accurate recognition of Xing Yi Quan actions is realized.Experimental results show that compared with the CSI-SRC method and the traditional RSSI model method,the proposed method has better recognition accuracy and robustness.
  • DENG Yujing, WU Zhihao, LIN Youfang
    Computer Engineering. 2020, 46(1): 294-301. https://doi.org/10.19678/j.issn.1000-3428.0053569
    Accurate prediction of Flight Passenger Load Factors(FPLFs) helps address the overbooking and overserving of flight seats.However,traditional time series-based prediction methods only focus on the variation features of recent daily FPLFs and ignore the impacts of other factors,leading to limited prediction performance.To address the problem,this paper proposes a recurrent neural network model using a multi-granularity temporal attention mechanism,named MTA-RNN.The model constructs a hierarchical attention mechanism to acquire the temporal correlation of FPLFs under different temporal granularities.Also,other factors,including the properties of a flight,festivals and holidays,are introduced into the model to compute the target FPLFs over a certain period in the future.Experimental results on datasets of real historical FPLFs show that the MTA-RNN model has higher prediction accuracy than the ARIMA,LSTM and Seq2seq models.
  • LIU Hang, LI Yang, YUAN Haoqi, WANG Junying
    Computer Engineering. 2020, 46(1): 302-308. https://doi.org/10.19678/j.issn.1000-3428.0053446
    Single-channel speech separation based on deep learning needs to calculate a time-frequency mask,which cannot be learnt in existing methods.Moreover,the time-frequency mask is not encapsulated in deep learning for optimization,so it relies on Wiener filtering for subsequent processing.Therefore,this paper proposes a speech signal separation method based on Generative Adversarial Networks(GAN).In the speech generation stage,the recursive derivation algorithm and a sparse encoder are introduced to improve the time-frequency generation results.Then,the generated speech is entered into the discriminator for classification,so as to reduce the disturbance between signal sources.Experimental results show that compared with other speech signal separation methods,such as the codec-based method and the recurrent neural network-based method,the SDR and SIR separation indexes of the proposed method increase by 6.2 dB and 5.0 dB respectively.
  • CHEN Xi, ZHU Xiaodong, GAO Guangkuo, XIAO Fangxiong
    Computer Engineering. 2020, 46(1): 309-314. https://doi.org/10.19678/j.issn.1000-3428.0053116
    In order to solve the problem of insufficient expression of sentiment information in the TF-IDF model,this paper proposes the Senti model to extract the sentiment information in the text,including positive/negative sentiment words,negative words,transition words and adverbs of degree in the sentences.The sentiment function of punctuation in the sentences is also considered,and the sentiment dictionary and semantic rules are used to extract sentiment information,thus generating the corresponding sentiment matrix.On this basis,the proposed model is spliced with the TF-IDF model to form a hybrid vector model.Experimental results show that compared with the TF-IDF model alone,the hybrid vector model shows higher accuracy and a better classification effect.
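    The kind of sentiment-information extraction described above can be illustrated with a toy feature extractor; the mini-lexicons and the four-element feature vector below are placeholders, not the paper's actual dictionary, semantic rules or matrix layout:

```python
# Illustrative mini-lexicons; a real system would use a full sentiment
# dictionary covering sentiment words, negators and degree adverbs.
POSITIVE = {"good", "excellent"}
NEGATIVE = {"bad", "terrible"}
NEGATORS = {"not", "never"}
DEGREE = {"very": 2.0, "slightly": 0.5}

def senti_features(tokens):
    """Scan a sentence and accumulate a small sentiment feature vector:
    [positive score, negative score, negation count, exclamation count].
    A degree adverb scales the next sentiment word; a negator flips its
    polarity; '!' is counted as a punctuation sentiment cue."""
    pos = neg = negations = 0.0
    weight, flip = 1.0, False
    for tok in tokens:
        if tok in DEGREE:
            weight = DEGREE[tok]
        elif tok in NEGATORS:
            flip = not flip
            negations += 1
        elif tok in POSITIVE or tok in NEGATIVE:
            positive = (tok in POSITIVE) != flip   # negation flips polarity
            if positive:
                pos += weight
            else:
                neg += weight
            weight, flip = 1.0, False   # scope ends at the sentiment word
    bangs = float(tokens.count("!"))
    return [pos, neg, negations, bangs]
```

    The resulting vector would then be concatenated (spliced) with the document's TF-IDF vector to form the hybrid representation.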
  • LIAO Shuya, LI Demin, ZHANG Guanglin, XU Mengran
    Computer Engineering. 2020, 46(1): 315-320. https://doi.org/10.19678/j.issn.1000-3428.0053230
    With the increasing number of private cars in cities,real-time access to parking lot information is of great importance to reduce traffic congestion and invalid cruising.Therefore,in a hybrid vehicular network with different types of cars,this paper proposes a parking lot information transmission algorithm,BMILP,based on minimum information loss probability.Then,on the basis of the proposed algorithm,an approach to obtain parking lot information is designed for private cars with parking demands on different road modes.Experimental results show that compared with the BOPBR algorithm,the proposed algorithm can reduce the competition probability during the parking lot information transmission process and shorten the transmission delay.