
15 March 2019, Volume 45 Issue 3
    

  • ZHANG Haoshenglun,LI Chong,KE Yong,ZHANG Shibo
    Computer Engineering. 2019, 45(3): 1-6. https://doi.org/10.19678/j.issn.1000-3428.0050119

    A distributed User Browsing Model (UBM) algorithm is proposed to quickly mine user behavior from massive search click logs. The examination parameter E of the original UBM algorithm depends only on the ranking position of a search result and the click position of the previous document, and is therefore very stable. Based on this characteristic, the EM iterative solution is transformed into a distributed UBM algorithm that estimates the examination probability by sampling in order to solve for the attractiveness. Simulation results on the Spark platform show that, compared with the original UBM algorithm, the proposed algorithm resolves the serious data skew problem in click logs and is more efficient.
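
As background, the UBM factorization the abstract relies on can be sketched as follows; the parameter values and table layout here are illustrative, not taken from the paper:

```python
# Minimal sketch of the UBM click probability. In UBM,
#   P(click at rank r | previous click at r') = alpha * gamma[(r, r')],
# where alpha is the document's attractiveness for the query and gamma is the
# examination probability, which depends only on the rank r and the rank r' of
# the previously clicked document -- the stability the abstract exploits.

def click_probability(alpha: float, gamma: dict, rank: int, prev_click_rank: int) -> float:
    """P(click) = attractiveness * examination for one (rank, prev_click) pair."""
    return alpha * gamma[(rank, prev_click_rank)]

# Hypothetical examination table: deeper ranks are examined less often.
gamma = {(1, 0): 0.9, (2, 0): 0.6, (2, 1): 0.8, (3, 0): 0.4}

p = click_probability(alpha=0.5, gamma=gamma, rank=2, prev_click_rank=1)  # 0.5 * 0.8 = 0.4
```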

  • LIU Biao,WANG Baosheng,DENG Wenping
    Computer Engineering. 2019, 45(3): 7-13. https://doi.org/10.19678/j.issn.1000-3428.0049811

    Cloud computing and container technology bring convenience to workflow operation, but problems remain such as difficult management, insufficient resource utilization, and low levels of intelligence and automation. Therefore, a containerized workflow framework supporting elastic scaling is proposed. On this basis, a workflow auto-scaling model based on CPU usage is presented, which automatically expands the number of containers when a workflow process is overloaded, reducing task waiting time. When the task load drops, the process can be scaled down without losing tasks, saving resources and costs. Experimental results show that the number of process expansions is positively related to processing time, which better eliminates workflow bottlenecks; when the workflow is overloaded, the same number of tasks can be completed in a shorter time.

  • JIANG Meng,YU Minggang,WANG Zhixue
    Computer Engineering. 2019, 45(3): 14-19. https://doi.org/10.19678/j.issn.1000-3428.0052715

    Large-scale ontology mapping in the context of big data has high time complexity and low efficiency and accuracy. Therefore, a multi-strategy adaptive large-scale ontology mapping algorithm based on modularity and local confidence is proposed. The ontology is clustered and modularized, correlated sub-ontologies with high inter-module similarity are discovered with an information retrieval strategy, the local confidence under each mapping strategy is calculated among the correlated sub-ontologies, and the weight of the corresponding strategy is adjusted adaptively according to the local confidence when combining the mapping results. On this basis, a heuristic greedy strategy is used to extract the mapping results, which are then corrected according to mapping rules. Experimental results show that, compared with the Falcon and ASMOV methods, the proposed algorithm achieves higher recall, precision and F-measure.

  • ZHANG Wei,WANG Zhijie
    Computer Engineering. 2019, 45(3): 20-25,31. https://doi.org/10.19678/j.issn.1000-3428.0052626

    A distributed system is an ideal choice for processing temporal join operations over big data, but existing distributed systems do not natively support temporal join queries and cannot meet the low-latency, high-throughput processing requirements of temporal big data. Therefore, a two-level in-memory index scheme based on Spark is proposed. A global index is used to prune distributed partitions, and a local temporal index is used to query within partitions, improving data retrieval efficiency. A partitioning method is designed for temporal data to optimize global pruning. Experimental results on real and synthetic datasets show that the scheme significantly improves the processing efficiency of temporal join operations.

  • GAO Jun,HUANG Xiance
    Computer Engineering. 2019, 45(3): 26-31. https://doi.org/10.19678/j.issn.1000-3428.0049976

    The traditional TF-IDF algorithm calculates the correlation weights between keywords and documents only from the perspective of term frequency and inverse document frequency, ignoring the influence of user interest on weight calculation. To better serve user information retrieval, a correlation weight algorithm based on search log association is proposed. From a user-oriented correlation perspective, a user interest model is built by analyzing the user's search logs, and, combined with the idea of distributed computing, the MapReduce programming framework is used to parallelize the computing tasks. Experimental results show that the algorithm not only improves efficiency when dealing with massive data, but also dynamically adjusts the weights of retrieval terms according to the user's historical retrieval records, enhancing the interaction between users and the system.
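
For reference, the TF-IDF baseline the abstract extends can be sketched in a few lines; the toy corpus below is illustrative:

```python
import math
from collections import Counter

# Minimal TF-IDF sketch: the weight of a term in a document is its term
# frequency times the inverse document frequency over the corpus, with no
# notion of user interest -- the limitation the proposed algorithm addresses.

def tf_idf(term, doc, corpus):
    tf = Counter(doc)[term] / len(doc)              # term frequency in the document
    df = sum(1 for d in corpus if term in d)        # documents containing the term
    idf = math.log(len(corpus) / df) if df else 0.0  # inverse document frequency
    return tf * idf

corpus = [["big", "data", "search"], ["search", "log"], ["user", "interest"]]
w = tf_idf("search", corpus[0], corpus)  # common term -> small positive weight
```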

  • GAO Quan,WAN Xiaodong
    Computer Engineering. 2019, 45(3): 32-35,40. https://doi.org/10.19678/j.issn.1000-3428.0049606

    To address the high time complexity of the lookup operation in the FP-Growth algorithm, this paper proposes a new algorithm named LBPFP. Based on the PFP algorithm, it adds a hash table to the header table to achieve fast access to items, and designs a workload model based on prefix length to optimize the parallel process and improve the efficiency of the algorithm. Comparative experiments on the webdocs.dat dataset show that LBPFP outperforms the PFP, HPFP and DPFP algorithms.
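
The hash-backed header table can be sketched as below; the class and names are illustrative, not LBPFP's actual data structures. In a plain FP-tree header table, locating an item's node-link list takes a linear scan; backing the table with a hash map makes that lookup O(1) on average:

```python
# Sketch of a hash-indexed FP-tree header table (illustrative).

class HeaderTable:
    def __init__(self):
        self.index = {}  # item -> list of FP-tree node references

    def add_node(self, item, node):
        # Append a node to the item's node-link list, creating it on first use.
        self.index.setdefault(item, []).append(node)

    def nodes(self, item):
        # Constant-time (average) access by item instead of a linear scan.
        return self.index.get(item, [])

table = HeaderTable()
table.add_node("milk", "node1")
table.add_node("milk", "node2")
table.add_node("bread", "node3")
```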

  • WU Yinghao,LING Jie
    Computer Engineering. 2019, 45(3): 36-40. https://doi.org/10.19678/j.issn.1000-3428.0049086

    Most existing cloud storage data integrity verification methods have low efficiency and high communication overhead. To solve this problem, an improved data integrity verification method for cloud storage is proposed. The bilinear pairing technique is used to verify data integrity and realize public verifiability. An index table mechanism is designed for dynamic verification, and the random mask technique is used to improve the security of the method. Analysis and experimental results show that the method can effectively resist malicious attacks on servers, with lower communication overhead and higher computing efficiency.

  • ZHOU Qi,CHAI Xiaoli,MA Kejie,YU Zeren
    Computer Engineering. 2019, 45(3): 41-46. https://doi.org/10.19678/j.issn.1000-3428.0052189

    Because tensor Tucker decomposition is widely used in image processing, face recognition, signal processing and other fields, the Tucker decomposition algorithm has become a key research object. However, current popular Tucker decomposition algorithms need to unfold tensors many times, so most of the acceleration effort is consumed by repeated tensor unfolding. To solve this problem, a modified Tucker decomposition module for the CUDA platform is proposed. By jointly optimizing the Tucker decomposition algorithm and the CUDA platform, the tensor unfolding process is omitted, reducing the demands on the acceleration system and improving acceleration efficiency. Experimental results show that the modified Tucker decomposition algorithm achieves better acceleration performance on the CUDA platform.

  • TAO Wenjing,LU Yang,WEI Xing,JIA Xiangli
    Computer Engineering. 2019, 45(3): 47-53,59. https://doi.org/10.19678/j.issn.1000-3428.0050662

    When a software implementation obtains timestamps above the network driver layer of the protocol stack, it is susceptible to protocol stack delay and jitter, and synchronization accuracy is low. To solve this problem, a precise clock synchronization method implemented purely in software is proposed. Based on the open-source PTPd2 code, the IEEE 1588 protocol is implemented in pure software, and Linux kernel functions are used to obtain message timestamps in the network driver layer, effectively avoiding protocol stack interference compared with the Network Time Protocol (NTP), which obtains timestamps in the application layer. At the same time, the Wireshark packet capture software is used to record when messages pass the Media Access Control (MAC) layer, and the delay and jitter introduced between the network driver layer and the MAC layer are compensated in PTPd2, further improving clock synchronization accuracy. Experimental results show that when the master and slave clock devices are directly connected, the P and I values and the synchronization period are set reasonably, and the boundary time is compensated, the clock synchronization accuracy can reach 19 μs. This method meets the requirements of most distributed control systems.

  • HAI Meisheng,YI Peng,JIANG Yiming
    Computer Engineering. 2019, 45(3): 54-59. https://doi.org/10.19678/j.issn.1000-3428.0049876

    Network Function Virtualization (NFV) deploys traditional network functions on general-purpose servers in software form, and network operators establish service function chains according to service requests to provide services for users. However, existing service chain deployment focuses on reducing operator costs without considering user experience. Therefore, considering both users and service providers, and based on user requirements for service delay and availability, a heuristic algorithm combining genetic algorithm and tabu search is proposed, which uses the advantages of both to improve solution efficiency and complete online deployment of the service chain. Experimental results show that, compared with the ResourceMin and TimeShort strategies, the service deployment success rate and resource utilization of the algorithm are increased by 6% and 8% respectively.

  • ZHU Xiaodong,WANG Jinlin,WANG Lingfang,DING Li
    Computer Engineering. 2019, 45(3): 60-64,72. https://doi.org/10.19678/j.issn.1000-3428.0048911

    In-network storage optimization for future networks faces problems such as dependence on specific protocols, implementation at high layers of the network stack, and a lack of practical deployments. To solve these problems, Protocol Oblivious Forwarding (POF), a Software Defined Networking (SDN) technology, is adopted: the functions and interaction modes of the control plane and data plane are designed, and a cooperative storage architecture at the data link layer is constructed. By extending POF instructions and processing flows, a collaborative caching method based on this architecture is proposed to effectively utilize the cache resources of peripheral nodes. Experimental results show that the proposed method provides transparent collaborative storage support for future network applications and has higher forwarding efficiency than collaborative caching methods based on CCNx and NDN.

  • LI Chaofan,CHEN Qingkui
    Computer Engineering. 2019, 45(3): 65-72. https://doi.org/10.19678/j.issn.1000-3428.0049676

    Graphics Processing Unit (GPU) clusters are widely used for their high performance, but as GPU clusters grow, their high power consumption reduces system reliability. Therefore, a GPU cluster power consumption collection system is proposed, and a GPU cluster power collection and monitoring network based on a ZigBee Wireless Sensor Network (WSN) is designed, together with a collection communication protocol and a database storage system. Running the system shows that communication conflicts can be effectively avoided. Experimental results show that the monitoring network can accurately collect the power consumption of each GPU in the cluster, with a measurement error of less than 1% and a packet loss rate of less than 0.005%.

  • FENG Xu,HUA Qingyi,FAN Pan,WANG Wenjian
    Computer Engineering. 2019, 45(3): 73-77,90. https://doi.org/10.19678/j.issn.1000-3428.0049889

    To meet current usability and reusability requirements for mobile device user interfaces, and to remedy the shortcoming that existing mobile User Interface Description Languages (UIDLs) neither consider the inherent properties of mobile devices nor support large-grained pattern components, this paper designs PXMUL, a mobile device user interface description language based on the eXtensible Markup Language (XML), and gives an implementation framework for mobile user interfaces based on pattern components. The attributes needed to describe an interface are defined in three modules (interface layout, logic and environment), and user interface design and development are realized on the basis of large-grained components. Example evaluation results show that PXMUL reduces learning cost and shortens the development cycle while ensuring the feasibility and effectiveness of interface implementation.

  • CHEN Jiaojiao,ZHU Weiping,TU Mingxuan,TANG Yijie,SUN Zeyu
    Computer Engineering. 2019, 45(3): 78-84. https://doi.org/10.19678/j.issn.1000-3428.0050055

    Considering the high density and strong mobility of crowds in large shopping malls, and in order to identify and predict dynamic groups in indoor spaces, a method is proposed that identifies groups by spatial-sequence clustering combined with moving objects' position and direction features. For group location prediction, a sequential tree storage structure is proposed that accounts for incremental updating of the dataset; it obtains frequent area sequences and the corresponding association rules by scanning the database only once, and supports single-step and multi-step position prediction. To improve the accuracy of group location prediction, a method based on group appearance time and group size is proposed. Experimental verification on the ATC dataset shows that when the detection rate of group objects reaches 87.6%, the accuracy of group identification reaches 90.3%; compared with algorithms such as LAR and TLAR, the single-step and multi-step position prediction accuracies reach 91.2% and 33.8% respectively.

  • PAN Chengsheng,JIA Yaru,CAI Ruiyan,YANG Li
    Computer Engineering. 2019, 45(3): 85-90. https://doi.org/10.19678/j.issn.1000-3428.0049722

    To address the problems of long satellite link networking times and the difficulty of integrating and interoperating IP and ATM technologies in space information networks, a routing strategy for space information networks is proposed based on a satellite Multi-Protocol Label Switching (MPLS) networking scheme. IP packets and ATM cells are encapsulated in a unified MPLS format, integrating IP and ATM technologies. To select transmission paths reasonably, a path selection algorithm based on hop count and bandwidth utilization is proposed. OPNET simulation results show that, compared with the OSPF satellite network routing strategy, this strategy effectively reduces network transmission delay.

  • PENG Daqin,WANG Fulong,SUN Xiangyue
    Computer Engineering. 2019, 45(3): 91-95. https://doi.org/10.19678/j.issn.1000-3428.0049702

    To improve the energy efficiency of heterogeneous cellular networks, a collaborative optimization algorithm for user association and power control based on effective rate weights is proposed. A two-level iterative algorithm is designed to calculate the user association indices and the optimal transmission power. With power held constant, the sum-of-ratios problem is transformed into a polynomial form in the outer loop to obtain the optimal user association indices, while the base station transmission power is allocated by Newton's method in the inner loop. Experimental results show that, compared with the MSUA, MSUUA and other algorithms, this algorithm achieves higher energy efficiency and associates more users with micro base stations.
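
The inner-loop power update relies on Newton's method; a generic sketch of that iteration follows (the paper's actual objective is not given here, so the function f below is an illustrative stand-in):

```python
# Generic Newton iteration: x <- x - f(x) / f'(x), repeated until the step
# is below tolerance. The inner loop of the abstract applies this idea to
# the power-allocation objective; here we solve a simple stand-in equation.

def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Find a root of f via Newton's method."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Illustrative: solve x^2 - 2 = 0, whose positive root is sqrt(2).
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
```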

  • ZHU Guohui,CHEN Xing
    Computer Engineering. 2019, 45(3): 96-100. https://doi.org/10.19678/j.issn.1000-3428.0049751

    In massive Multiple-Input Multiple-Output (MIMO) systems, the growing numbers of antennas and users at the base station increase the dimension of the channel matrix, which raises the computational complexity of the precoding matrix. To solve this problem, a low-complexity precoding algorithm is proposed by combining Truncated Polynomial Expansion (TPE) theory with the Minimum Mean Square Error (MMSE) precoding algorithm. The sum of the first J terms of a matrix polynomial is used to approximate the matrix inverse; based on MMSE precoding, the precoding matrix of the proposed algorithm is derived, and an expression for the optimal order under a transmit power constraint is obtained. Simulation results show that the proposed algorithm effectively reduces the computational complexity of precoding while achieving spectral efficiency similar to that of the MMSE precoding algorithm.
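
The core idea of TPE can be illustrated with a Neumann series: for a suitably scaled matrix A with spectral radius of (I - A) below one, the inverse is approximated by the first J terms of sum of (I - A)^k, avoiding explicit inversion. The 2x2 pure-Python helpers below are illustrative, not the paper's implementation:

```python
# Approximate A^{-1} by the truncated series sum_{k=0}^{J} (I - A)^k.

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(a, b):
    return [[a[i][j] + b[i][j] for j in range(len(a))] for i in range(len(a))]

def tpe_inverse(a, terms):
    n = len(a)
    identity = [[float(i == j) for j in range(n)] for i in range(n)]
    residual = [[identity[i][j] - a[i][j] for j in range(n)] for i in range(n)]
    approx, power = identity, identity
    for _ in range(terms):
        power = mat_mul(power, residual)   # (I - A)^k
        approx = mat_add(approx, power)    # accumulate series terms
    return approx

A = [[0.9, 0.1], [0.2, 0.8]]               # well conditioned: series converges
inv_approx = tpe_inverse(A, terms=30)
product = mat_mul(A, inv_approx)           # should be close to the identity
```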

  • QIU Hang,YOU Wei,TANG Hongbo,WANG Chen,NIU Ben
    Computer Engineering. 2019, 45(3): 101-106,112. https://doi.org/10.19678/j.issn.1000-3428.0049780

    In the deployment of Service Function Chains (SFCs), link reliability is low and redundant resource overhead is high. To solve this problem, a multipath-based link backup scheme using two-stage deployment is proposed. In the virtual network function deployment phase, node deployment constraints are relaxed and some virtual network functions are placed on the same physical node to reduce bandwidth resource overhead. In the virtual link deployment phase, multipath mapping based on path partitioning is implemented to ensure link reliability and reduce backup resource overhead. Simulation results show that the proposed scheme performs well in terms of request acceptance rate and backup bandwidth overhead gain.

  • WANG Zhenchao,BAI Lisha,SONG Boyao
    Computer Engineering. 2019, 45(3): 107-112. https://doi.org/10.19678/j.issn.1000-3428.0051854

    To address interference between low-power nodes in Ultra-Dense Networks (UDN), a time-domain interference coordination scheme based on the K-means clustering algorithm is proposed. Nodes are classified according to their distribution characteristics, the interference of each class is calculated, and the total interference is obtained. On this basis, Almost Blank Subframes (ABS) are allocated in the time domain according to the interference minimization principle to maximize local system throughput. Simulation results show that the proposed scheme is more accurate and achieves higher network throughput than an interference coordination scheme based on a generalized interference model.
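
The K-means step the scheme starts from can be sketched in one dimension; node positions and initial centers below are illustrative:

```python
# Minimal 1-D K-means: alternate between assigning points to the nearest
# center and moving each center to the mean of its assigned points.

def kmeans_1d(points, centers, iters=20):
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:  # assignment step
            idx = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        centers = [sum(c) / len(c) if c else centers[i]  # update step
                   for i, c in enumerate(clusters)]
    return centers, clusters

points = [1.0, 1.2, 0.8, 9.0, 9.5, 10.1]
centers, clusters = kmeans_1d(points, centers=[0.0, 5.0])
# Converges to one cluster near 1.0 and one near 9.5.
```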

  • SHI Zhenyu,LI Linsen
    Computer Engineering. 2019, 45(3): 113-116,124. https://doi.org/10.19678/j.issn.1000-3428.0045025

    Traditional codebook schemes are designed on a Rayleigh fading channel model, which loses the gain of the Line of Sight (LOS) component in dense small cell channels, and traditional optimal codeword feedback has high overhead. To solve these problems, a dense small cell codebook design method based on short-term channel information feedback is proposed. Exploiting the LOS component gain in small cells, the Singular Value Decomposition (SVD) method is used to extend a low-rank codebook to a high-order dense small cell codebook. At the same time, a dense small cell codeword feedback method based on short-term channel information is presented: the feedback process is divided into long-term and short-term channel codeword feedback to save overhead. Simulation results show that the proposed method effectively improves system capacity and reduces feedback overhead.

  • LU Beini,DU Yugen
    Computer Engineering. 2019, 45(3): 117-124. https://doi.org/10.19678/j.issn.1000-3428.0049479

    Traditional collaborative filtering methods face problems such as data sparsity, cold start and noise when predicting unknown Quality of Service (QoS) values. Therefore, a new QoS prediction method based on community discovery is proposed. Users are divided into communities by spectral clustering, Web services are clustered according to their location information, and an improved hybrid collaborative filtering method is used to predict QoS values. Experimental results show that this method alleviates the cold start problem for new users and achieves higher prediction accuracy than QoS prediction methods based on collaborative filtering.
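
The prediction step that hybrid methods of this kind build on can be sketched as a user-based collaborative filter; the QoS matrix and similarity scores below are made up for illustration:

```python
# A missing QoS value for a user is estimated as the similarity-weighted
# average of the values observed by other users for the same service.

def predict_qos(target_user, service, qos, similarity):
    num = den = 0.0
    for user, records in qos.items():
        if user != target_user and service in records:
            num += similarity[(target_user, user)] * records[service]
            den += similarity[(target_user, user)]
    return num / den if den else None

qos = {"u1": {"s1": 120.0}, "u2": {"s1": 100.0}, "u3": {}}
similarity = {("u3", "u1"): 0.8, ("u3", "u2"): 0.2}
pred = predict_qos("u3", "s1", qos, similarity)  # (0.8*120 + 0.2*100) / 1.0 = 116.0
```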

  • PENG Hao,PENG Min,AN Ning,ZHOU Qingfeng
    Computer Engineering. 2019, 45(3): 125-131,137. https://doi.org/10.19678/j.issn.1000-3428.0049839

    Traditional multi-point localization methods based on Received Signal Strength Indication (RSSI) have low positioning accuracy and poor stability. To solve this problem, a region-discriminant localization algorithm based on Bluetooth RSSI is proposed. After a region discriminant model of RSSI signals is established, RSSI vectors for different regions are constructed. The weights for RSSI ranging are calculated by Bayesian estimation and analysis, the precise region where RSSI signals are received is discriminated, and the RSSI vectors of that region are selected for multi-point centroid localization. Experimental results show that the accuracy of the proposed algorithm reaches 90.63%, and the average positioning error can be as low as 0.897 m under linear fitting at 95% confidence.
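
The RSSI ranging underlying such methods is usually the standard log-distance path-loss model; the reference power and path-loss exponent below are illustrative, not the paper's calibrated values:

```python
# Invert the log-distance model RSSI(d) = RSSI(1m) - 10*n*log10(d)
# to recover distance in metres from a measured RSSI value.

def rssi_to_distance(rssi: float, rssi_at_1m: float = -59.0, n: float = 2.0) -> float:
    return 10 ** ((rssi_at_1m - rssi) / (10 * n))

d = rssi_to_distance(-79.0)  # 10 ** ((-59 + 79) / 20) = 10.0 m
```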

  • LONG Zengyan,CHEN Zhigang,XU Chenglin
    Computer Engineering. 2019, 45(3): 132-137. https://doi.org/10.19678/j.issn.1000-3428.0049724

    To recommend friends to users more efficiently in social networks, interaction behavior in the micro-blog social network is analyzed. Considering network structure, user attributes and user interaction characteristics, the likelihood of users establishing friendships is calculated. On this basis, a friend recommendation algorithm for social networks based on user interaction is proposed. Experimental results show that the proposed algorithm is more accurate than algorithms that consider only network topology or user attributes.

  • ZHAO Qi,ZHAO Huailin,ZHU Bo
    Computer Engineering. 2019, 45(3): 138-141,147. https://doi.org/10.19678/j.issn.1000-3428.0049809

    Time-Triggered Ethernet (TTE) based on IEEE 1588 time synchronization solves the uncertainty of data transmission delay, but does not consider the effect of timestamp accuracy on the synchronization accuracy of the IEEE 1588 protocol. To solve this problem, a frequency drift estimation and bias estimation model is established to calculate the impact of timestamp accuracy on the synchronization accuracy of the IEEE 1588 protocol, and the time synchronization error between the TTE protocol and the IEEE 1588 protocol is deduced theoretically. Experimental results show that a TTE network based on the IEEE 1588 protocol achieves higher time synchronization accuracy than the standard TTE time synchronization algorithm when the timestamp accuracy reaches 0.1 μs or better.
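
For reference, the standard IEEE 1588 two-way exchange computes offset and delay from four timestamps, assuming a symmetric path; the timestamp values below are illustrative:

```python
# From t1 (master send), t2 (slave receive), t3 (slave send), t4 (master
# receive), recover the slave's clock offset and the one-way path delay.

def ptp_offset_delay(t1: float, t2: float, t3: float, t4: float):
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way path delay
    return offset, delay

# Example: slave clock 5 us ahead of master, true one-way delay 2 us.
offset, delay = ptp_offset_delay(t1=0.0, t2=7.0, t3=10.0, t4=7.0)  # (5.0, 2.0)
```

Timestamp errors enter these two differences directly, which is why timestamp accuracy bounds the achievable synchronization accuracy.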

  • FENG Gang,QIN Xizhong,JIA Zhenhong,NIU Hongmei,WANG Zhehui
    Computer Engineering. 2019, 45(3): 142-147. https://doi.org/10.19678/j.issn.1000-3428.0050228

    To improve the energy efficiency of Wireless Powered Communication Networks (WPCN), a multiple-signal-source energy harvesting model in a cognitive radio environment is established. Because the model's optimization problem is non-convex, it is transformed into a standard convex optimization problem using the fractional programming principle. On this basis, an iterative optimization algorithm is proposed to maximize system energy efficiency through joint optimization of time allocation and power control. Simulation results show that the proposed model achieves higher energy efficiency and faster convergence.

  • WU Dapeng,XIAO Bowen,YAN Junjie
    Computer Engineering. 2019, 45(3): 148-154. https://doi.org/10.19678/j.issn.1000-3428.0049648

    When Wireless Body Area Networks (WBAN) monitor physiological signals, directly using the support set of previously recovered signals reduces the energy effectiveness of the network. To solve this problem, a data processing mechanism based on partial support set detection is proposed. According to the attenuation characteristics of the signals' multi-scale wavelet coefficients, an initial support set is obtained by iterative support set detection, and the correct support set information is then separated by intersecting it with the support set of previously recovered signals. On this basis, the data is recovered by the Orthogonal Multi-Matching Pursuit (OMMP) algorithm. Simulation results show that, compared with the OMP and OMMP algorithms, this mechanism improves the data compression rate while guaranteeing signal recovery quality.

  • LI Zhixiang,LI Yun,CHU Yanjie
    Computer Engineering. 2019, 45(3): 155-161. https://doi.org/10.19678/j.issn.1000-3428.0050049

    To balance the convergence and diversity of solutions in Multi-objective Optimization (MOP) search, two improved balancing strategies are proposed by analyzing decomposition-based multi-objective evolutionary algorithms. A breeding operator is designed based on the values of the current solution and the parent solution, and is compared with the original breeding operator to select the better solution; the neighbor set is adapted according to the current generation number. On this basis, a decomposition-based multi-objective evolutionary algorithm is given. Experimental results verify the effectiveness of the two balancing strategies and show that the algorithm outperforms the MOEA/D, NSGA-II and IBEA algorithms.

  • GENG Huantong,ZHOU Lifa,DING Yangyang,ZHOU Shansheng
    Computer Engineering. 2019, 45(3): 162-168. https://doi.org/10.19678/j.issn.1000-3428.0049957

    To address the low selection pressure and slow convergence of decomposition-based multi-objective evolutionary algorithms, a Differential Evolution (DE) algorithm based on Locally Linear Embedding (LLE) is proposed. Using LLE, the dimension of the population's objective space is reduced, the reduced solutions are layered by fast non-dominated sorting, and the population's convergence speed is improved by differential evolution operations. Experimental results show that, compared with the dMOPSO algorithm, the proposed algorithm has higher selection pressure and faster convergence while maintaining diversity.

  • CHAI Qiang,LI Junhui,KONG Fang,ZHOU Guodong
    Computer Engineering. 2019, 45(3): 169-174. https://doi.org/10.19678/j.issn.1000-3428.0049899

    Semantic parsing from multiple languages to semantic expressions takes several semantically equivalent sentences in different languages as input and parses them into the corresponding semantic expression. Under the neural encoder-decoder framework, inputs in different languages are combined to build a dual-encoder decoding model. Based on this model, two different natural languages are used as the source and the semantic expression as the target, realizing multi-language semantic parsing. Evaluation results on a semantic parsing dataset with multilingual sentences show that the multi-language method achieves higher accuracy than single-language semantic parsing.

  • ZHANG Qianqian,TIAN Xuedong,YANG Fang,LI Xinfu
    Computer Engineering. 2019, 45(3): 175-181,187. https://doi.org/10.19678/j.issn.1000-3428.0052686

    Queries and retrieval results in Mathematical Information Retrieval (MIR) are mainly mathematical expressions, ignoring the semantics of the mathematical text in documents. Therefore, a mathematical expression retrieval model incorporating mathematical text features is proposed. Mathematical text is extracted by traversing Chinese scientific and technical documents, and mathematical dictionaries are used to map the text into LaTeX mathematical expressions, which are converted into binary tree structures. On this basis, a mathematical expression index is constructed and a matching algorithm is designed to realize retrieval over both mathematical text and expressions. Experiments show that the method improves the retrieval performance of the mathematical retrieval system.

  • CAI Yongjia,LI Guanyu
    Computer Engineering. 2019, 45(3): 182-187. https://doi.org/10.19678/j.issn.1000-3428.0049326

    The traditional mass diffusion recommendation algorithm suffers from low diversity and neglects users' social network information and object popularity. Therefore, this paper proposes a mass diffusion recommendation algorithm based on a trust mechanism in social networks. The trust mechanism is introduced to form an optimal neighbor set for the target user. Simulating diffusion on the user-object bipartite network, the initial object resources are reallocated according to the trust mechanism; bidirectional diffusion of objects is considered, and tunable object popularity parameters are introduced to achieve resource redistribution, yielding better recommendations for the target users. Experimental results on real-world datasets show that the algorithm enhances the diversity of recommendation results while maintaining high recommendation accuracy.
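
The classical two-step mass diffusion (ProbS) that the algorithm modifies can be sketched as follows; the toy bipartite network is illustrative, and the trust-based reallocation the paper adds is not shown:

```python
# Two-step mass diffusion on a user-object bipartite network: each object the
# target user has collected holds one unit of resource; objects split their
# resource over their users, then users split what they received over their
# own objects. The resulting object scores rank recommendations.

def mass_diffusion(target_user, holdings):
    objects = {o for objs in holdings.values() for o in objs}
    degree = {o: sum(o in objs for objs in holdings.values()) for o in objects}
    resource = {o: 1.0 if o in holdings[target_user] else 0.0 for o in objects}

    # Step 1: objects -> users.
    user_res = {user: sum(resource[o] / degree[o] for o in objs)
                for user, objs in holdings.items()}

    # Step 2: users -> objects.
    scores = {o: 0.0 for o in objects}
    for user, objs in holdings.items():
        for o in objs:
            scores[o] += user_res[user] / len(objs)
    return scores

holdings = {"alice": {"a", "b"}, "bob": {"b", "c"}}
scores = mass_diffusion("alice", holdings)
# "c" (uncollected by alice) receives resource via the shared object "b".
```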

  • LU Qiang,LIU Xinqi
    Computer Engineering. 2019, 45(3): 188-196,201. https://doi.org/10.19678/j.issn.1000-3428.0050177

    Connecting and restoring a relatively complete personal trajectory from multiple trajectory databases is important for travel recommendation and mobile navigation. For personal trajectory recovery, an ensemble learning method based on Recurrent Neural Networks (RNN) is proposed. A formal model of personal trajectory recovery is defined, each training library is divided into multiple training sub-libraries using track-point-number sampling, and an RNN model is used to describe the splicing degree of personal trajectories. Ensemble learning is then used to construct multiple RNNs to restore personal trajectories. Experimental results show that this method captures the temporal and spatial continuity of trajectories and realizes personal trajectory recovery.

  • YANG Ruiqi,ZHANG Yuexia
    Computer Engineering. 2019, 45(3): 197-201. https://doi.org/10.19678/j.issn.1000-3428.0049608

    Aiming at the problem that most existing link prediction algorithms have low accuracy,a link prediction algorithm based on the Normalized Common Neighbor and Local Clustering(NCNLC) similarity index is proposed,combining global and local features.The NCNLC similarity index of the nodes is analyzed,and a cumulative influence factor is assigned to the links between nodes.Simulation results show that compared with the LAS index similarity algorithm,the proposed algorithm has higher prediction accuracy and can effectively predict links in temporal directed social networks.

  • SUN Ying,WANG Botao
    Computer Engineering. 2019, 45(3): 202-206. https://doi.org/10.19678/j.issn.1000-3428.0049748

    In order to solve the problems of low accuracy and slow detection speed of the Deformable Parts Model(DPM) for vehicle detection at night,an improved method based on DPM is proposed.In the training stage,Gamma preprocessing is used to correct the vehicle samples taken at night,and the gradient model of the objects is trained.In the testing phase,a saliency region detection method based on the (R-B) chromatic aberration difference is proposed,which reduces the computational complexity by decreasing the area of the region to be detected.An adaptive weight parameter allocation is also proposed,which assigns large weight values to the important feature parts.Experimental results show that the improved detection method has a precision rate of 95.12%,a recall rate of 91.50%,and an average detection time of 48 ms per frame,with better real-time performance and robustness.
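    The Gamma preprocessing and (R-B) difference steps named in the abstract can be sketched per pixel as follows (standard gamma correction and a plain channel difference,not the paper's implementation;the gamma value is a hypothetical example):

```python
def gamma_correct(pixel, gamma=0.5):
    """Gamma-correct an 8-bit intensity value; gamma < 1 brightens
    dark (night-time) pixels while leaving 0 and 255 fixed."""
    return round(255 * (pixel / 255) ** gamma)

def rb_saliency(r, b):
    """A simple (R-B) chromatic difference: large positive values flag
    reddish regions (e.g. taillights) as candidate salient areas."""
    return max(r - b, 0)
```

Restricting DPM to pixels with high `rb_saliency` shrinks the search region and hence the detection time.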

  • KU Haohua,ZHOU Ping,CAI Xiaodong,YANG Haiyan,LIANG Xiaoxi
    Computer Engineering. 2019, 45(3): 207-211. https://doi.org/10.19678/j.issn.1000-3428.0050802

    In the process of person re-identification,pose variations lead to misalignment between spatial areas of images,which results in a low recognition rate.Therefore,this paper proposes a new person re-identification method.Firstly,a convolutional neural structure is utilized to calculate response maps of person images,and the body joints are located according to the extreme points in the response maps.Secondly,body sub-regions are divided according to the positions of the body joints and aligned before further feature extraction.Finally,the features of the body sub-regions are fused for identification.By introducing the k-reciprocal nearest neighbor method,more positive samples can be included in the nearest neighbors.With the Jaccard distance,the computational cost is reduced by encoding the k-reciprocal nearest neighbor sets into a vector that assigns larger weights to closer neighbors.Experimental results show that compared with feature extraction from the whole person image and using the Mahalanobis distance alone,the proposed method can significantly improve the accuracy of person re-identification.
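    The k-reciprocal nearest neighbor idea mentioned in the abstract can be sketched as follows (a toy version over a precomputed distance matrix,not the paper's re-ranking implementation):

```python
def k_nearest(dist, q, k):
    """Indices of the k nearest gallery items to query q (excluding q),
    given a full pairwise distance matrix."""
    order = sorted((i for i in range(len(dist)) if i != q),
                   key=lambda i: dist[q][i])
    return set(order[:k])

def k_reciprocal(dist, q, k):
    """Keep only those neighbors of q that also rank q among their own
    k nearest: a mutual-neighbor test that filters false matches."""
    return {i for i in k_nearest(dist, q, k) if q in k_nearest(dist, i, k)}
```

The mutual constraint is what makes the retained neighbors more likely to be true positives than a plain top-k list.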

  • FEI Yanjia,LI Fucui,SHAO Feng
    Computer Engineering. 2019, 45(3): 212-216,224. https://doi.org/10.19678/j.issn.1000-3428.0048890

    Extracting features with a deep Convolutional Neural Network(CNN) is closer to the visual perception of the human brain than traditional manual feature extraction methods.Therefore,a two-channel combination model is proposed.The combination of an aesthetic information channel and a scene information channel is used to automatically extract the aesthetic information and scene category information in the image,and the two types of information are finally combined to form an aesthetic classifier.Training and testing on the AVA library show that compared with image local feature extraction methods,the model structure is simple and achieves high classification accuracy.

  • TIAN Xuedong,CHAI Yanli,WANG Haibin
    Computer Engineering. 2019, 45(3): 217-224. https://doi.org/10.19678/j.issn.1000-3428.0052619

    Ancient Chinese characters often have complex structures,diverse styles and serious character degradation,so traditional elastic grid feature extraction is difficult to achieve an ideal effect.Considering the advantages of hesitant fuzzy theory in multi-feature and multi-attribute decision making,an image retrieval method for ancient Chinese characters based on hesitant fuzzy features is proposed.The elastic grid of the ancient Chinese character image is divided,evaluation indexes of the influence of the distance,position and length of the surrounding grids on the stroke pixels in the current grid are generalized,and the corresponding membership degrees are calculated.The similarity between the image to be retrieved and the candidate images is compared by the hesitant fuzzy distance measurement to get the retrieval results.Experimental results show that the proposed method has relatively good performance in ancient Chinese character image retrieval.

  • CHEN Siyuan,SONG Zhan,YIN Yean
    Computer Engineering. 2019, 45(3): 225-231,236. https://doi.org/10.19678/j.issn.1000-3428.0050289

    In order to quickly and accurately estimate the 3D surface structure of objects,this paper proposes a 3D reconstruction method based on smartphone images.It uses the Scale Invariant Feature Transform(SIFT) feature detection method and the RANSAC algorithm to solve the homography matrix,registers multiple smartphone images taken from different view angles to the same perspective,and uses total variation regularization and an Energy Minimization(EM) joint estimation method to solve the Generalized Bas-relief(GBR) parameters to realize 3D reconstruction of the object surface.Experimental results show that the method can restore the fine texture of the object surface with high reconstruction accuracy.
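    The RANSAC principle used here for homography estimation can be illustrated on a simpler 2-point line model (a generic sketch of RANSAC,not the paper's homography solver;the tolerance and iteration count are arbitrary):

```python
import random

def ransac_line(points, iters=200, tol=0.5, seed=0):
    """RANSAC sketch: repeatedly fit y = a*x + b from a minimal 2-point
    sample and keep the model with the largest inlier set. A homography
    works the same way, with 4-point samples instead of 2."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical pair: skip the degenerate minimal model
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [p for p in points if abs(p[1] - (a * p[0] + b)) <= tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = (a, b), inliers
    return best, best_inliers
```

Because outliers rarely survive the inlier count, the winning model is effectively fitted only to the consistent correspondences.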

  • LÜ Xikui,WANG Qisheng,LI Yongfa
    Computer Engineering. 2019, 45(3): 232-236. https://doi.org/10.19678/j.issn.1000-3428.0049215

    In order to solve the problem of cracks in block rendering of massive triangular network terrain,a method of dynamically adding shared boundary points based on the resolution of the Digital Elevation Model(DEM) is proposed based on terrain features.The boundary points of terrain blocks are obtained quickly by a blind path-finding method.The interpolation point spacing is dynamically determined according to the resolution of the DEM,and the boundary width and effective length of the adjacent terrain blocks are calculated.Then,according to the interpolation spacing and the effective length of the boundary,the number of shared boundary points to be added is calculated,so that the shared boundary points at all levels of resolution are consistent and meet the requirements of multi-resolution terrain rendering.Test results show that this method can effectively eliminate the cracks in massive terrain rendering of Triangulated Irregular Network(TIN) under the premise of ensuring the accuracy of terrain rendering,and it can also solve the problem of crack elimination between blocks of different resolutions.

  • LU Tianran,YU Fengqin,CHEN Ying
    Computer Engineering. 2019, 45(3): 237-241,249. https://doi.org/10.19678/j.issn.1000-3428.0050160

    Aiming at the problem that the dimensionality disaster easily occurs when processing video data,a dimension reduction method called Linear Sequence Discriminant Analysis(LSDA) is proposed for human action recognition.The ViBe algorithm is used to subtract the backgrounds of video frames to get action areas,and dense trajectories are extracted in these areas to suppress the noise caused by camera movements.The Fisher Vector is used to encode the features,and linear sequence discriminant analysis is conducted on them,with the sequence class separability measured by the dynamic time warping distance.To reduce the data dimension,a linear discriminative projection maps the feature vectors in sequences to a lower-dimensional subspace by maximizing the between-class separability and minimizing the within-class separability.A Support Vector Machine(SVM) is learned from the reduced-dimension features to get the human action recognition results.Simulation results on the KTH and UCF101 datasets show that compared with Principal Component Analysis(PCA),Linear Discriminant Analysis(LDA) and other dimension reduction methods,the proposed method can effectively improve the recognition accuracy.
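    The dynamic time warping distance used above to measure sequence class separability can be sketched as follows (a textbook implementation,not the paper's code):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two numeric sequences:
    the minimum cumulative |a_i - b_j| cost over all monotone alignments,
    so sequences that differ only in speed still score as similar."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

Note the warping: a stretched copy of a sequence has distance zero, which is why DTW suits variable-speed action sequences better than a pointwise metric.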

  • SUN Ting
    Computer Engineering. 2019, 45(3): 242-249. https://doi.org/10.19678/j.issn.1000-3428.0050381

    Existing schemes have deficiencies such as limited application scenes and over-segmentation of moving backgrounds.An unsupervised video object segmentation algorithm is proposed,which can automatically detect important objects from video sequences.Markov energy,spatio-temporal energy and antagonism energy are introduced from the view of the foreground and background probability distributions.Then,the problem of detecting important objects from the background is modeled as a nonconvex optimization problem based on mixed energy minimization,and a method based on Alternating Convex Optimization(ACO) is proposed to decompose the problem into two quadratic programming subproblems.To make full use of time-domain correlation to improve the reliability of object segmentation,a forward-backward delivery strategy is also adopted.A comprehensive simulation is carried out on a variety of video datasets.Experimental results show that the performance of the proposed algorithm is significantly better than that of other recent video object segmentation algorithms.

  • WANG Nan,LI Zhi,CHENG Xinyu,CHEN Yi,LUO Hao
    Computer Engineering. 2019, 45(3): 250-255,261. https://doi.org/10.19678/j.issn.1000-3428.0052206

    In order to improve the information quality and anti-attack ability of video watermarking algorithms,a video dual watermarking algorithm is proposed.A robust watermark is constructed by exploiting the robustness of the selected block global mean and block mean.Based on Compressive Sensing(CS),a fragile watermarking algorithm is constructed,which can locate the tampered position effectively and restore the tampered content appropriately.Experimental results show that the watermarking algorithm has excellent visual quality and robustness.Compared with multifunctional dual watermarking algorithms,the peak signal to noise ratio is improved by about 12.8%.For most geometric and signal attacks,the proposed algorithm shows strong anti-attack capability compared with the adaptive video algorithm.Compared with the traditional transform-domain fragile watermarking algorithm,this algorithm applies the fragile watermark to video data,which can enhance the security of the video data.

  • MA Yuxi,TAN Li,DONG Xu,YU Chongchong
    Computer Engineering. 2019, 45(3): 256-261. https://doi.org/10.19678/j.issn.1000-3428.0050353

    In order to improve the recognition rate and security of the Video Teller Machine(VTM) face recognition login system,an interactive liveness detection algorithm that combines improved blink detection,background detection and randomly combined action instructions is proposed.Based on the OpenCV cascade classifier face detection and the Local Binary Feature(LBF) face alignment algorithm,the blink detection method is improved by combining the eye coordinate proportion and the eye pigment change.Background detection and randomly combined action instructions are used to resist dynamic video attacks.Making use of the image quality detection and correction function,the system also performs well under weak light,skew and other environmental conditions.Experiments are carried out on the liveness face database CASIA-FASD and a self-built sample library.The results show that the recognition rate reaches 97.67%,an obvious improvement over multispectral,convolutional neural network and other existing detection algorithms.

  • LI Jing,SUN Cunwei,XIE Kai,HE Jianbiao
    Computer Engineering. 2019, 45(3): 262-267,272. https://doi.org/10.19678/j.issn.1000-3428.0049975

    When training a Convolutional Neural Network(CNN) with small-sample voiceprints as the training set,the network cannot reach a good convergence state,which results in a low recognition rate.Therefore,this paper proposes a new voiceprint recognition method.The proposed method uses a deep CNN to extract the rich and latent features of voiceprints,which improves the voiceprint recognition rate.To solve the problem that small samples cannot train the CNN,this paper proposes an image augmentation algorithm based on the principle of convex lens imaging.At the same time,Fast Batch Normalization(FBN) is introduced into the convolutional process,which improves the speed of network convergence and shortens the training time.The TIMIT speech database containing the voices of 630 speakers is selected for training,validation and testing.Experimental results show that compared with the GMM,GMM-UBM and GMM-SVM algorithms,the proposed method improves the recognition rate by 7.3%,2.2% and 2.8% respectively,and compared with the original network,the training time of the FBN-Alexnet network is reduced by 48.2%,indicating that it is an effective method for voiceprint recognition with small samples.

  • ZHAN Gen,XIAO Jing,CHEN Yujing,CHEN Jun
    Computer Engineering. 2019, 45(3): 268-272. https://doi.org/10.19678/j.issn.1000-3428.0050413

    As traditional rate control methods cannot effectively control the segment size in adaptive bit-rate live streaming,the bitrate adaptation of the video player is affected by the fluctuation of segment size and makes inaccurate decisions,resulting in delay on the client side.To solve this problem,a segment-level rate control algorithm is proposed.A bit allocation strategy based on frame type is used inside each segment.The concept of the key P-frame is proposed,and the distribution of key P-frames is adjusted to optimize the allocation under different video contents.At the same time,a linear prediction model based on SATD and the Quantization Parameter(QP) is built,and the QP of each row is adjusted iteratively to control the coding size of the whole frame.Experimental results show that the proposed algorithm can control the bit-rate of segments precisely and ensure the video quality.

  • XU Xintao,CHAI Xiaoli,XIE Bin,SHEN Chen,WANG Jingping
    Computer Engineering. 2019, 45(3): 273-277. https://doi.org/10.19678/j.issn.1000-3428.0051615

    This paper proposes a Chinese text summarization extraction algorithm called DK-TextRank,which combines the Doc2Vec model,K-means and the TextRank algorithm to improve summarization accuracy for Chinese texts.After using the Doc2Vec model for text vectorization,the DK-TextRank algorithm uses an improved K-means algorithm for similar text clustering,and the TextRank algorithm with weight impact factors in each cluster to sort and extract topic sentences,and then generates a summary.Experimental results show that compared with the traditional TF-IDF and TextRank algorithms,the DK-TextRank algorithm has an F value of 79.36% when the number of summary statements is 7,and the extracted abstract has higher quality.
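    The TextRank iteration underlying DK-TextRank can be sketched as follows (a plain damped power iteration over a sentence similarity matrix,without the paper's weight impact factors):

```python
def textrank(sim, d=0.85, iters=50):
    """Iterate TextRank scores over a symmetric sentence similarity
    matrix: each sentence distributes its score to its neighbors in
    proportion to edge weight, damped by d (PageRank-style)."""
    n = len(sim)
    scores = [1.0] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            rank = 0.0
            for j in range(n):
                if j == i:
                    continue
                out = sum(sim[j][k] for k in range(n) if k != j)
                if out:
                    rank += sim[j][i] / out * scores[j]
            new.append((1 - d) + d * rank)
        scores = new
    return scores
```

Sentences with the highest converged scores are extracted as topic sentences for the summary.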

  • DI Ruitong,WANG Hong,FANG Youli
    Computer Engineering. 2019, 45(3): 278-285,292. https://doi.org/10.19678/j.issn.1000-3428.0050754

    This paper proposes an improved fake review identification method combining time series with multi-scale features.Considering the influence of time factors on the ratings and their distribution,it constructs a fake review identification model based on multi-dimensional time series to extract abnormal features.It divides the abnormal review features into groups,and benchmark features and subdivision-scale features are extracted according to the multi-scale feature idea.To improve the noise immunity of fake review identification models,it uses a clustering algorithm based on density peaks to identify fake reviews.Experimental results show that compared with fake review identification methods using density peaks clustering on benchmark-scale and multi-scale features,this method has a higher identification accuracy for fake reviews,and the AUC value reaches 92%.

  • CHEN Siyuan,PENG Chao,CAI Linsen,GUO Lanying
    Computer Engineering. 2019, 45(3): 286-292. https://doi.org/10.19678/j.issn.1000-3428.0050035

    The Long Short-Term Memory(LSTM) network based on the attention mechanism generally takes a lot of time during the training process,and only uses sentences as the network input,which makes it difficult to effectively distinguish the different polarities of different targets in the same sentence.To address this problem,this paper proposes a deep neural model combining a Convolutional Neural Network(CNN) and a Regional LSTM(CNN-RLSTM).By segmenting the region according to the specific target through the regional LSTM,the feature information of different targets can be effectively distinguished while retaining the emotional information of the specific target,and the emotional information of the entire sentence is retained by the CNN.Experimental results show that the CNN-RLSTM model can effectively identify the emotional polarity of different targets,and the model training time is shorter than that of traditional network models.

  • TAN Mengjie,LÜ Xin,TAO Feifei
    Computer Engineering. 2019, 45(3): 293-299,308. https://doi.org/10.19678/j.issn.1000-3428.0051932

    In order to help investors find investment hot spots in a short time,this paper combines the characteristics of financial news and proposes a financial news topic detection model.The model constructs a time window based on financial news to segment news streams,combines topic events,feature words,news semantics and financial named entities to extract text features,and applies the Nearest Neighbor-Hierarchical Agglomerative Clustering(NNHAC) algorithm to get the topic clusters.Experimental results show that compared with traditional multi-feature topic detection models,this model can effectively reduce the running time of the clustering algorithm and improve the accuracy of topic detection,and to a certain extent it helps investors make decisions and judgments.

  • ZHOU Jinfeng,YE Shiren,WANG Hui
    Computer Engineering. 2019, 45(3): 300-308. https://doi.org/10.19678/j.issn.1000-3428.0050043

    This paper proposes a deep Convolutional Neural Network(CNN) model to efficiently extract the local semantic features of different convolutional-layer windows for text.The model avoids manually specifying multiple window sizes and retains the local semantic features of different windows by stacking a number of convolutional layers.Classification modules are built on the Global Max Pooling(GMP) layer to calculate the category score for the local semantic features of each window.The model synthesizes these category scores to complete the sentiment classification annotation.Experimental results show that the model classifies text sentiment faster than other CNN models.

  • YU Jingmin,XIANG Lingyun,ZENG Daojian
    Computer Engineering. 2019, 45(3): 309-314. https://doi.org/10.19678/j.issn.1000-3428.0050407

    In order to represent the semantic information of the text content digitally and improve the accuracy of detecting stego texts based on synonym substitution,a novel natural language steganalysis method is proposed.Word2vec is employed to train on a large-scale corpus to obtain multi-dimensional word vectors which contain rich semantic information.Then,the cosine distance between a synonym and its context word vectors is used to measure the correlation between two words and calculate the fitness of synonyms in a specific context.According to the effect of the synonym substitutions made in the embedding process on the context fitness of the synonyms,detection features are extracted to form a feature vector,and a Bayesian classification model is trained on the feature vectors to detect the stego texts.Experimental results show that the proposed method has good detection performance:its average detection precision and average recall for stego texts with different embedding rates reach 97.71% and 92.64%,respectively.
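    The cosine-based context fitness described in the abstract can be sketched as follows (an illustrative version;averaging the similarity over all context words is an assumption,not necessarily the paper's exact formula):

```python
def cosine(u, v):
    """Cosine similarity between two equal-length word vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = sum(x * x for x in u) ** 0.5
    nv = sum(y * y for y in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def context_fitness(word_vec, context_vecs):
    """Average cosine similarity between a candidate synonym's vector
    and the vectors of its context words: a substitution that lowers
    this fitness is a cue for steganographic embedding."""
    if not context_vecs:
        return 0.0
    return sum(cosine(word_vec, c) for c in context_vecs) / len(context_vecs)
```

A synonym substituted by the embedder typically fits its context worse than the original word, and that fitness drop is what the detection features capture.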

  • ZHANG Yang,LI Xiongfei
    Computer Engineering. 2019, 45(3): 315-320. https://doi.org/10.19678/j.issn.1000-3428.0049795

    Most existing Internet text public opinion analysis models do not consider the influence of time factors and lack a quantitative description of the dynamic change process of public opinion,so it is difficult to accurately discover the dynamic process and key elements of public opinion evolution.Therefore,this paper proposes an improved quantitative model of public opinion change.Through a linear regression model,the static performance of the opinion in different time windows is obtained.A trend line is used to describe the change of public opinion over time,and the public opinion change index is obtained by combining the overall trend and the change angle of the public opinion.Based on fluctuation forecasts of domestic A-share stocks,experimental results show that the model has higher prediction accuracy and stability than traditional public opinion analysis models.
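    The linear regression trend line used to describe opinion change within a time window can be sketched as follows (a standard least-squares slope,not the paper's full change index):

```python
def trend_slope(values):
    """Least-squares slope of a time series indexed 0..n-1: the
    direction and steepness of public opinion within a time window.
    Positive slope = rising opinion, zero = flat, negative = falling."""
    n = len(values)
    xm = (n - 1) / 2
    ym = sum(values) / n
    num = sum((i - xm) * (y - ym) for i, y in enumerate(values))
    den = sum((i - xm) ** 2 for i in range(n))
    return num / den if den else 0.0
```

Comparing slopes across consecutive windows gives the change angle from which a change index of the kind described above could be combined.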