
Highlights

  • Research Hotspots and Reviews
    ZHAO Kai, HU Yuhuan, YAN Junqiao, BI Xuehua, ZHANG Linlin
    Computer Engineering. 2025, 51(8): 1-15. https://doi.org/10.19678/j.issn.1000-3428.0069147

    Blockchain, as a distributed and trusted database, has gained significant attention in academic and industrial circles for its effective application in the domain of digital copyright protection. Traditional digital copyright protection technologies suffer from issues such as difficulties in tracking infringements, complexities in copyright transactions, and inadequate protection of legitimate rights, which severely hamper the development of digital copyright protection endeavors. The immutability, traceability, and decentralization inherent in blockchain technology provide a highly reliable, transparent, and secure solution to mitigate the risks associated with digital copyright infringement. This overview starts with an introduction to the fundamental principles of blockchain technology. Then, it discusses the latest research findings on the integration of blockchain with traditional copyright protection technologies to address the problems inherent in traditional copyright protection schemes. Further, an evaluation of the practical applications and potential of blockchain is conducted, emphasizing its positive impact on the copyright protection ecosystem. Finally, this overview delves into the challenges and future trends related to blockchain-based copyright protection, ultimately aiming to establish a more robust and sustainable blockchain copyright protection system.

  • Artificial Intelligence and Pattern Recognition
    WANG Shuai, SHI Yancui
    Computer Engineering. 2025, 51(8): 190-202. https://doi.org/10.19678/j.issn.1000-3428.0069636

    The sequence recommendation algorithm dynamically models the user's historical behavior to predict the content they may be interested in. This study focuses on the application of contrastive Self-Supervised Learning (SSL) in sequence recommendation, enhancing the model's representation ability in sparse data scenarios by designing effective self-supervised signals. First, a personalized data augmentation method incorporating user preferences is proposed to address the issue of noise introduced by random data augmentation. This method guides the augmentation process based on user ratings and combines different augmentation methods for short and long sequences to generate augmented sequences that align with user preferences. Second, a mixed-augmentation training approach is designed to address the issue of imbalanced feature learning during training. In the early stages of training, augmentation sequences are generated using randomly selected methods to enhance the model performance and generalization. In the later stages, augmentation sequences with high similarity to the original sequences are selected to enable the model to comprehensively learn the actual preferences and behavior patterns of users. Finally, traditional sequence prediction objectives are combined with SSL objectives to infer user representations. Experimental verification is performed using the Beauty, Toys, and Sports datasets. Compared with the best result in the baseline model, the HR@5 indicator of the proposed method increases by 6.61%, 3.11%, and 3.76%, and the NDCG@5 indicator increases by 11.40%, 3.50%, and 2.16%, respectively, for the aforementioned datasets. These experimental results confirm the rationality and validity of the proposed method.
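
The HR@K and NDCG@K figures cited above are standard top-K ranking metrics. A minimal sketch of how they are computed for a single held-out target item (function names and the toy ranking are illustrative, not from the paper):

```python
import math

def hit_rate_at_k(ranked_items, target, k):
    """HR@K: 1 if the held-out target item appears in the top-K ranking."""
    return 1.0 if target in ranked_items[:k] else 0.0

def ndcg_at_k(ranked_items, target, k):
    """NDCG@K with a single relevant item: 1/log2(rank + 2) if it is in the
    top K (rank is the 0-based position), 0 otherwise. Averaging these
    per-user scores gives the dataset-level metric."""
    if target in ranked_items[:k]:
        rank = ranked_items.index(target)
        return 1.0 / math.log2(rank + 2)
    return 0.0

# Toy example: a model's ranked candidates for one user.
ranking = ["item_7", "item_3", "item_9", "item_1", "item_5"]
hr = hit_rate_at_k(ranking, "item_9", 5)   # target is in the top 5 -> 1.0
ndcg = ndcg_at_k(ranking, "item_9", 5)     # position 2 -> 1/log2(4) = 0.5
```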

  • Development Research and Engineering Application
    GAO Qingxin, LIU Cong, ZHANG Zaigui, GUO Na, SU Xuan, ZENG Qingtian
    Computer Engineering. 2025, 51(8): 396-405. https://doi.org/10.19678/j.issn.1000-3428.0069301

    As a pivotal technology driving organizational digital transformation, Robotic Process Automation (RPA) has garnered significant attention from both the academic and industrial sectors in recent years. However, current deployment strategies suffer from a lack of process analysis, leading to misguided deployment of RPA robots and resource wastage. Furthermore, existing RPA robot deployment methods based on process mining depend overly on domain-specific expertise, limiting their generality. To address these challenges, this study proposes the integration of process mining with RPA robots and presents a deployment method for RPA robots based on process mining. The method first introduces an approach to mine the global process model from event logs and extract a Time Petri net containing temporal information. Subsequently, critical process paths are identified using a method designed to recognize key process paths. Finally, an optimization deployment strategy for RPA robots is introduced, which determines the optimal deployment node set under time and cost constraints. The proposed method is implemented using ProM, an open-source process mining tool platform. It is compared with four deployment methods in experiments that focus on improving time efficiency. The experimental results indicate that, compared to other deployment methods, this approach improves time efficiency by 22% to 41%, and the deployment accuracy reaches 1, without relying on domain-specific expert knowledge, validating its generality and accuracy.

  • Graphics and Image Processing
    HAO Hongda, LUO Jianxu
    Computer Engineering. 2025, 51(8): 270-280. https://doi.org/10.19678/j.issn.1000-3428.0069269

    Deep learning has been widely applied to medical imaging. A medical image segmentation model based on an attention mechanism is one of the main methods used in current research. For the multi-organ segmentation task, most existing 2D segmentation models mainly focus on the overall segmentation effect of slices, while ignoring the loss or under-segmentation of small object feature information in slices, which limits the model's segmentation performance. To solve this problem, this study proposes a multi-organ semantic segmentation model, DASC-Net, based on multi-scale feature fusion and an improved attention mechanism. The overall framework of DASC-Net is based on an encoder-decoder architecture. The encoder uses the ResNet-50 network and sets a skip connection with the decoder. The attention mechanism is realized using the parallel structure of a Dual Attention Module (DAM) and a Small Object Capture (SOC) module to perform multi-scale regional feature fusion. DASC-Net not only perceives the feature information of larger objects but also retains the feature information of small objects through attention weight reconstruction, which effectively addresses the limitations of the attention module and further improves the segmentation performance of the model. The experimental results on the CHAOS dataset show that DASC-Net achieves 83.72%, 75.79%, 87.75%, 85.63%, and 77.60% in terms of Sensitivity, Jaccard similarity coefficient, Positive Predictive Value (PPV), Dice similarity coefficient, and mean Intersection over Union (mIoU), respectively; the Dice similarity coefficient and 95% Hausdorff Distance (HD95) values on the Synapse dataset are 82.44% and 21.25 mm, respectively. DASC-Net performs better than the other segmentation networks on both datasets, which demonstrates its reliable and accurate segmentation performance.
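
The overlap metrics reported on CHAOS (Sensitivity, Jaccard, PPV, Dice) all derive from per-class true/false positive and false negative pixel counts; mIoU is the Jaccard score averaged over classes. A minimal sketch on binary masks (function name and toy masks are illustrative):

```python
def segmentation_metrics(pred, truth):
    """Compute overlap metrics for one class from flattened binary masks."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)        # correct foreground
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)    # over-segmentation
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)    # missed foreground
    sensitivity = tp / (tp + fn)            # recall over ground-truth pixels
    ppv = tp / (tp + fp)                    # positive predictive value (precision)
    jaccard = tp / (tp + fp + fn)           # IoU; averaged over classes -> mIoU
    dice = 2 * tp / (2 * tp + fp + fn)      # Dice similarity coefficient
    return sensitivity, ppv, jaccard, dice

pred  = [1, 1, 1, 0, 0, 0]
truth = [1, 1, 0, 1, 0, 0]
sens, ppv, jac, dice = segmentation_metrics(pred, truth)
```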

  • Graphics and Image Processing
    MIAO Ru, LI Yi, ZHOU Ke, ZHANG Yanna, CHANG Ranran, MENG Geng
    Computer Engineering. 2025, 51(8): 292-304. https://doi.org/10.19678/j.issn.1000-3428.0068856

    The complex backgrounds, diverse target types, and significant scale variations in remote sensing images lead to target omission and false detection. To address these issues, this study proposes an improved Faster R-CNN multi-object detection model. First, the ResNet-50 backbone network is replaced with the Swin Transformer to enhance the model's feature extraction capability. Second, a Balanced Feature Pyramid (BFP) module is introduced to fuse shallow and deep semantic information, further strengthening the feature fusion effect. Finally, in the classification and regression branches, a dynamic weighting mechanism is incorporated to encourage the network to focus more on high-quality candidate boxes during training, thereby improving the precision of target localization and classification. The experimental results on the RSOD dataset show that the proposed model significantly reduces the number of Floating-Point Operations (FLOPs) compared to the Faster R-CNN model. The proposed model achieves a 10.7 percentage point improvement in mAP@0.5∶0.95 and a 10.6 percentage point increase in Average Recall (AR). Compared to other mainstream detection models, the proposed model achieves higher accuracy while reducing the false detection rate. These results indicate that the proposed model significantly enhances detection accuracy in remote sensing images with complex backgrounds.

  • Research Hotspots and Reviews
    ZHANG Jin, CHEN Zhu, CHEN Zhaoyun, SHI Yang, CHEN Guanjun
    Computer Engineering. 2025, 51(7): 1-11. https://doi.org/10.19678/j.issn.1000-3428.0068870

    Simulators play an indispensable role in research and development across many scientific fields. In computer architecture design in particular, simulators provide a secure and cost-effective virtual environment, enabling researchers to conduct rapid experimental analyses and evaluations. Simulators also accelerate the chip design and verification processes, conserving time and reducing resource expenditure. As processor architectures evolve, particularly with the growing diversity of dedicated processors, the key role of simulators in providing substantial feedback for architectural design exploration has gained prominence. This review surveys the current developments and applications of architectural simulators, highlighting a few illustrative examples. Analyzing the techniques employed by simulators dedicated to various processors allows a deeper understanding of the focal points and technical complexities of different architectures. Finally, this review assesses key directions for future architectural simulator development and forecasts their prospects in the field of processor design research.

  • Artificial Intelligence and Pattern Recognition
    PENG Juhong, ZHANG Chi, GAO Qian, ZHANG Guangming, TAN Donghua, ZHAO Mingjun
    Computer Engineering. 2025, 51(7): 152-160. https://doi.org/10.19678/j.issn.1000-3428.0069283

    Steel surface defect detection technology in industrial scenarios is hindered by low detection accuracy and slow convergence speed. To address these issues, this study presents an improved YOLOv8 algorithm, namely YOLOv8n-MDC. First, a Multi-scale Cross-fusion Network (MCN) is added to the backbone network. By establishing closer connections between feature layers, it promotes uniform information transmission and reduces semantic information loss during cross-layer feature fusion, thereby enhancing the ability of the model to perceive steel defects. Second, deformable convolution is introduced in the module to adaptively change the shape and position of the convolution kernel, enabling a more flexible capture of the edge features of irregular defects, reducing information loss, and improving detection accuracy. Finally, a Coordinate Attention (CA) mechanism is added to embed position information into channel attention, solving the problem of position information loss and enabling the model to perceive the position and morphological features of defects, thereby enhancing detection precision and stability. Experimental results on the NEU-DET dataset show that the YOLOv8n-MDC algorithm achieves an mAP@0.5 of 81.0%, which is 4.2 percentage points higher than that of the original baseline network. The algorithm has a faster convergence speed and higher accuracy; therefore, it meets the requirements of practical industrial production.

  • Graphics and Image Processing
    LIU Chunxia, MENG Jixing, PAN Lihu, GONG Dali
    Computer Engineering. 2025, 51(7): 326-338. https://doi.org/10.19678/j.issn.1000-3428.0069510

    A multimodal remote sensing small-target detection method, BFMYOLO, is proposed to address misdetection and omission issues in remote sensing images with complex backgrounds and limited effective information. The method utilizes a pixel-level Red-Green-Blue (RGB) and infrared (IR) image fusion module, namely, the Bimodal Fusion Module (BFM), to make full use of the complementarity of the two modalities and realize effective information fusion. In addition, a full-scale adaptive updating module, AA, is introduced to resolve multitarget information conflicts during feature fusion. This module incorporates the CARAFE up-sampling operator and shallow features to enhance non-neighboring layer fusion and improve the spatial information of small targets. An Improved task decoupling Detection Head (IDHead) is designed to handle classification and regression tasks separately, thereby reducing the mutual interference between different tasks and enhancing detection performance by fusing deeper semantic features. The proposed method adopts the Normalized Wasserstein Distance (NWD) loss function as the localization regression loss function to mitigate positional bias sensitivity. Results of experiments on the VEDAI, NWPU VHR-10, and DIOR datasets demonstrate the superior performance of the model, with mean Average Precision when the threshold is set to 0.5 (mAP@0.5) of 78.6%, 95.5%, and 73.3%, respectively. The model thus outperforms other advanced models in remote sensing small-target detection.
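
For context, the published NWD formulation models each bounding box as a 2D Gaussian and maps the 2-Wasserstein distance between two such Gaussians to a similarity in (0, 1], which is less sensitive to small positional offsets than IoU for tiny targets. A sketch of that standard form (the normalizing constant c is dataset-dependent; the paper's exact implementation may differ):

```python
import math

def nwd(box_a, box_b, c=1.0):
    """Normalized Wasserstein Distance between boxes given as (cx, cy, w, h).

    Each box is modeled as N([cx, cy], diag((w/2)^2, (h/2)^2)); the squared
    2-Wasserstein distance between such Gaussians reduces to the squared
    Euclidean distance between the vectors [cx, cy, w/2, h/2], and NWD maps
    it to (0, 1] via exp(-sqrt(W2)/c).
    """
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    w2 = ((ax - bx) ** 2 + (ay - by) ** 2
          + (aw / 2 - bw / 2) ** 2 + (ah / 2 - bh / 2) ** 2)
    return math.exp(-math.sqrt(w2) / c)

same = nwd((10, 10, 4, 4), (10, 10, 4, 4))   # identical boxes -> 1.0
```

The corresponding regression loss is typically 1 - NWD.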

  • Graphics and Image Processing
    SHA Yuyang, LU Jingtao, DU Haofan, ZHAI Xiaobing, MENG Weiyu, LIAN Xu, LUO Gang, LI Kefeng
    Computer Engineering. 2025, 51(7): 314-325. https://doi.org/10.19678/j.issn.1000-3428.0068674

    Image segmentation is a crucial technology for environmental perception, and it is widely used in various scenarios such as autonomous driving and virtual reality. With the rapid development of technology, computer vision-based blind guiding systems are attracting increasing attention as they outperform traditional solutions in terms of accuracy and stability. The semantic segmentation of road images is an essential feature of a visual guiding system. By analyzing the output of algorithms, the guiding system can understand the current environment and aid blind people in safe navigation, which helps them avoid obstacles, move efficiently, and follow the optimal moving path. Visual blind guiding systems are often used in complex environments, which require high running efficiency and segmentation accuracy. However, commonly used high-precision semantic segmentation algorithms are unsuitable for use in blind guiding systems owing to their low running speed and large number of model parameters. To solve this problem, this paper proposes a lightweight road image segmentation algorithm based on multiscale features. Unlike existing methods, the proposed model contains two feature extraction branches, namely, the Detail Branch and Semantic Branch. The Detail Branch extracts low-level detail information from the image, while the Semantic Branch extracts high-level semantic information. Multiscale features from the two branches are processed and used by the designed feature mapping module, which can further improve the feature modeling performance. Subsequently, a simple and efficient feature fusion module is designed for the fusion of features with different scales to enhance the ability of the model in terms of encoding contextual information by fusing multiscale features. A large amount of road segmentation data suitable for blind guiding scenarios are collected and labeled, and a corresponding dataset is generated. The model is trained and tested on the dataset.
The experimental results show that the mean Intersection over Union (mIoU) of the proposed method is 96.5%, which is better than that of existing image segmentation models. The proposed model achieves a running speed of 201 frames per second on an NVIDIA RTX 3090Ti, which is higher than that of existing lightweight image segmentation models. The model can be deployed on NVIDIA AGX Xavier to obtain a running speed of 53 frames per second, which can meet the requirements for practical applications.

  • Artificial Intelligence and Pattern Recognition
    SONG Jie, XU Huiying, ZHU Xinzhong, HUANG Xiao, CHEN Chen, WANG Zeyu
    Computer Engineering. 2025, 51(7): 127-139. https://doi.org/10.19678/j.issn.1000-3428.0069257

    Existing object detection algorithms suffer from low detection accuracy and poor real-time performance when detecting fall events in indoor scenes, owing to changes in angle and lighting. In response to this challenge, this study proposes an improved fall detection algorithm based on YOLOv8, called OEF-YOLO. The C2f module in YOLOv8 is improved using an Omni-dimensional Dynamic Convolution (ODConv) module, optimizing the four dimensions of the kernel space to enhance feature extraction capabilities and effectively reduce computational burden. Simultaneously, to capture finer-grained features, the Efficient Multi-scale Attention (EMA) module is introduced into the neck network to further aggregate pixel-level features and improve the network's processing ability in fall scenes. Integrating the Focal Loss idea into the Complete Intersection over Union (CIoU) loss function allows the model to pay more attention to difficult-to-classify samples and optimize overall model performance. Experimental results show that compared to YOLOv8n, OEF-YOLO achieves improvements of 1.5 and 1.4 percentage points in mAP@0.5 and mAP@0.5∶0.95, respectively, while the parameter count and computational complexity are only 3.1×10⁶ and 6.5 GFLOPs. Frames Per Second (FPS) increases by 44 on a Graphics Processing Unit (GPU), achieving high-precision detection of fall events while also meeting deployment requirements in low computing scenarios.
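
The idea of folding Focal Loss's sample weighting into an IoU-based regression loss can be sketched as follows. This shows one common focal-IoU weighting (in the spirit of Focal-EIoU, with a plain 1 - IoU base loss); the gamma value and base loss are illustrative assumptions, not necessarily the paper's exact CIoU variant:

```python
def focal_iou_loss(iou, gamma=0.5):
    """Focal-style IoU regression loss: iou**gamma * (1 - iou).

    The iou**gamma factor re-weights each box's contribution by its current
    overlap quality, so the gradient focus shifts between easy and hard
    samples instead of treating all boxes uniformly. Both gamma and the
    base term (1 - iou) are illustrative, not the paper's published form.
    """
    return (iou ** gamma) * (1.0 - iou)

# A well-localized box (IoU 0.9) contributes less loss than a poor one (IoU 0.5).
easy = focal_iou_loss(0.9)
hard = focal_iou_loss(0.5)
```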

  • Research Hotspots and Reviews
    PANG Xin, GE Fengpei, LI Yanling
    Computer Engineering. 2025, 51(6): 1-19. https://doi.org/10.19678/j.issn.1000-3428.0069005

    Acoustic Scene Classification (ASC) aims to enable computers to simulate the human auditory system in the task of recognizing various acoustic environments, which is a challenging task in the field of computer audition. With rapid advancements in intelligent audio processing technologies and neural network learning algorithms, a series of new algorithms and technologies for ASC have emerged in recent years. To comprehensively present the technological development trajectory and evolution in this field, this review systematically examines both early work and recent developments in ASC, providing a thorough overview of the field. This review first describes application scenarios and the challenges encountered in ASC and then details the mainstream frameworks in ASC, with a focus on the application of deep learning algorithms in this domain. Subsequently, it systematically summarizes frontier explorations, extension tasks, and publicly available datasets in ASC and finally discusses the prospects for future development trends in ASC.

  • Cyberspace Security
    YAO Yupeng, WEI Lifei, ZHANG Lei
    Computer Engineering. 2025, 51(6): 223-235. https://doi.org/10.19678/j.issn.1000-3428.0069133

    Federated learning enables participants to collaboratively model without revealing their raw data, thereby effectively addressing the privacy issue of distributed data. However, as research advances, federated learning continues to face security concerns such as privacy inference attacks and malicious client poisoning attacks. Existing improvements to federated learning mainly focus on either privacy protection or defense against poisoning attacks, without addressing both types of attacks simultaneously. To address both inference and poisoning attacks in federated learning, a privacy-preserving, poisoning-resistant federated learning scheme called APFL is proposed. This scheme involves the design of a model detection algorithm that utilizes Differential Privacy (DP) techniques to assign corresponding aggregation weights to each client based on the cosine similarity between the models. Homomorphic encryption techniques are employed for the weighted aggregation of the local models. Experimental evaluations on the MNIST and CIFAR10 datasets demonstrate that APFL effectively filters malicious models and defends against poisoning attacks while ensuring data privacy. When the poisoning ratio is no more than 50%, APFL achieves model performance consistent with the Federated Averaging (FedAvg) scheme in a non-poisoned environment. Compared with the Krum and FLTrust schemes, APFL exhibits average reductions of 19% and 9% in model test error rate, respectively.
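
Cosine-similarity-based aggregation weighting can be sketched in plaintext as follows (APFL additionally applies DP noise and performs the weighted aggregation under homomorphic encryption, both omitted here; function names and the reference-direction scoring are illustrative, in the style of FLTrust):

```python
import math

def cosine(u, v):
    """Cosine similarity between two flattened model-update vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def aggregation_weights(client_updates, reference):
    """Score each client by cosine similarity to a trusted reference update;
    clip negative scores to zero (excluding updates that point away from the
    reference direction) and normalize the rest into aggregation weights."""
    scores = [max(0.0, cosine(u, reference)) for u in client_updates]
    total = sum(scores) or 1.0
    return [s / total for s in scores]

# The third client's update points opposite the reference: likely poisoned.
updates = [[1.0, 0.0], [0.9, 0.1], [-1.0, 0.0]]
weights = aggregation_weights(updates, [1.0, 0.0])
```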

  • Research Hotspots and Reviews
    QIN Yongwang, ZHANG Yang, HU Xing, LIU Sheng, LI Shaoqing
    Computer Engineering. 2025, 51(6): 29-37. https://doi.org/10.19678/j.issn.1000-3428.0068882

    With the rapid increase in the complexity of integrated circuit design, a trend of globalization and division of labor has emerged, necessitating the involvement of an increasing number of third-party Intellectual Property (IP) core providers. The widespread use of third-party IP cores introduces risks of hardware trojans. To detect and evaluate the presence of hardware trojans and their potential functionalities in third-party IP cores, there is an urgent need to explore feasible hardware security evaluation methods for IP cores. The functional identification of digital circuit modules has garnered significant attention as a fundamental research area in hardware trojan analysis. In this study, the task of circuit function detection is transformed into a multiclassification problem. By leveraging the characteristics of the circuit and graph data structures, a gate-level circuit function classification and detection method based on Graph Attention Networks (GAT) is proposed. First, to address the lack of functional identification datasets for gate-level netlists, a representative set of Register Transfer Level (RTL) codes is collected and synthesized to generate gate-level netlists, thereby constructing a gate-level circuit dataset of appropriate scale and diversity. Subsequently, to extract and process the circuit feature information, a software tool based on text recognition is developed. This tool maps the complex interconnections of circuits into a structured and concise JSON (JavaScript Object Notation) format, thereby facilitating neural network processing. Finally, a graph attention neural network is employed to train a multiclassifier using the constructed gate-level netlist dataset. After training, the multiclassifier becomes capable of classifying and identifying unknown gate-level circuits.
The experimental results demonstrate that the classifier, after learning from more than 3 000 netlists in the self-built dataset, achieves a classification accuracy of 90% for 645 netlists across six categories.

  • Cyberspace Security
    CAO Bei, ZHAO Kui
    Computer Engineering. 2025, 51(6): 193-203. https://doi.org/10.19678/j.issn.1000-3428.0070158

    The accurate recognition of fake news is an important research topic in the online environment, where information proliferates and authenticity is difficult to distinguish. Existing studies mostly use multiple deep learning models to extract multivariate semantic features that capture different levels of semantic information in the text; however, simply splicing these features causes information redundancy and noise, limiting detection accuracy and generalization, and effective deep fusion methods remain lacking. In addition, existing studies tend to ignore the impact of the dual sentiments co-constructed by news content and its corresponding comments on revealing news authenticity. This paper proposes a Dual Emotion and Multi-feature Fusion based Fake News Detection (DEMF-FND) model to address these problems. First, the emotional features of news and comments are extracted by emotion analysis. Emotional difference features reflecting the correlation between the two are introduced using similarity computation, and a dual emotion feature set is constructed. Subsequently, a fusion mechanism based on multi-head attention is used to deeply fuse the global and local semantic features of the news text captured by a Bidirectional Long Short-Term Memory (BiLSTM) network and a designed Integrated Static-Dynamic Embedded Convolutional Neural Network (ISDE-CNN). Eventually, the dual emotion feature set is concatenated with the deeply fused semantic features and fed into a classification layer consisting of a fully connected layer to determine news authenticity.
Experimental results show that the proposed method outperforms the baseline methods in terms of benchmark metrics on three real datasets, namely Weibo20, Twitter15, and Twitter16, achieving accuracy improvements of 2.5, 2.3, and 5.5 percentage points, respectively, highlighting the importance of dual emotion and the deep fusion of semantic features in enhancing the performance of fake news detection.

  • Research Hotspots and Reviews
    LIU Kai, REN Hongyi, LI Ying, JI Yi, LIU Chunping
    Computer Engineering. 2025, 51(6): 49-56. https://doi.org/10.19678/j.issn.1000-3428.0068910

    Medical Visual Question Answering (Med-VQA) requires an understanding of content related to both medical images and text-based questions. Therefore, designing effective modal representations and cross-modal fusion methods is crucial for performing well in Med-VQA tasks. Current Med-VQA methods focus only on the global features of medical images and the distribution of attention within a single modality, ignoring medical information in the local features of images and cross-modal interactions, thereby limiting the understanding of image content. This study proposes the Cross-Modal Attention-Guided Medical VQA (CMAG-MVQA) model. First, based on U-Net encoding, this method effectively enhances the local features of an image. Second, from the perspective of cross-modal collaboration, a selection-guided attention method is proposed to introduce interactive information from other modalities. In addition, a self-attention mechanism is used to further enhance the image representation obtained by selection-guided attention. Ablation and comparative experiments on the VQA-RAD medical question-answering dataset show that the proposed method performs well in Med-VQA tasks and improves feature representation performance compared to similar methods.

  • Space-Air-Ground Integrated Computing Power Networks
    LI Bin, SHAN Huimin
    Computer Engineering. 2025, 51(5): 1-8. https://doi.org/10.19678/j.issn.1000-3428.0069423

    To address the challenges of insufficient computing capacity of end users and the unbalanced distribution of computing power among edge nodes in computing power networks, this study proposes an Unmanned Aerial Vehicle (UAV)-assisted Device-to-Device (D2D) edge computing solution based on incentive mechanisms. First, under constraints involving computing resources, transmission power, and the unit pricing of computing resources, a unified optimization problem is formulated to maximize system revenue. The problem jointly optimizes the task offloading ratio, computing resource allocation, UAV trajectory, and the transmission power and unit pricing of computing resources for both UAVs and users. The Proximal Policy Optimization (PPO) algorithm is employed to establish user offloading and purchasing strategies. In addition, an iterative strategy is implemented at each time step to solve the optimization problem and obtain the optimal solution. The simulation results demonstrate that the PPO-based system revenue maximization algorithm exhibits superior convergence and improves overall system revenue compared to the baseline algorithm.

  • Artificial Intelligence and Pattern Recognition
    HUANG Kun, QI Zhaojian, WANG Juanmin, HU Qian, HU Weichao, PI Jianyong
    Computer Engineering. 2025, 51(5): 133-142. https://doi.org/10.19678/j.issn.1000-3428.0069026

    Pedestrian detection in crowded scenes is a key technology in the intelligent monitoring of public spaces. It enables the intelligent monitoring of crowds, using object detection methods to detect the positions and number of pedestrians in videos. This paper presents Crowd-YOLOv8, an improved version of the YOLOv8 detection model, to address the issue of pedestrians being easily missed owing to occlusion and small target size in densely populated areas. First, nostride-Conv-SPD is introduced into the backbone network to enhance its capability of extracting fine-grained information, such as small object features in images. Second, small object detection heads and the CARAFE upsampling operator are introduced into the neck part of the YOLOv8 network to fuse features at different scales and improve the detection performance in the case of small targets. Experimental results demonstrate that the proposed method achieves an mAP@0.5 of 84.3% and an mAP@0.5∶0.95 of 58.2% on the CrowdHuman dataset, improvements of 3.7 and 5.2 percentage points, respectively, compared to those of the original YOLOv8n. On the WiderPerson dataset, the proposed method achieves an mAP@0.5 of 88.4% and an mAP@0.5∶0.95 of 67.4%, improvements of 1.1 and 1.5 percentage points compared to those of the original YOLOv8n.
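
The space-to-depth rearrangement underlying Conv-SPD-style blocks can be sketched as follows: spatial resolution is halved by folding each 2x2 patch of pixels into the channel dimension, so no fine-grained information is discarded the way a stride-2 convolution discards it (pure-Python sketch with illustrative names; real implementations operate on tensors):

```python
def space_to_depth(x, block=2):
    """Fold each block x block spatial patch of an H x W x C feature map
    (nested lists) into channels, producing (H/block) x (W/block) with
    C * block^2 channels. Every input value is preserved."""
    h, w = len(x), len(x[0])
    out = []
    for i in range(0, h, block):
        row = []
        for j in range(0, w, block):
            cell = []
            for di in range(block):
                for dj in range(block):
                    cell.extend(x[i + di][j + dj])   # gather the patch's pixels
            row.append(cell)
        out.append(row)
    return out

# A 4x4 single-channel map becomes 2x2 with 4 channels.
fmap = [[[r * 4 + c] for c in range(4)] for r in range(4)]
folded = space_to_depth(fmap)
```

A non-strided convolution applied after this rearrangement plays the role of the usual downsampling layer.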

  • Development Research and Engineering Application
    ZHOU Siyu, XU Huiying, ZHU Xinzhong, HUANG Xiao, SHENG Ke, CAO Yuqi, CHEN Chen
    Computer Engineering. 2025, 51(5): 326-339. https://doi.org/10.19678/j.issn.1000-3428.0069259

    As the main window of human-computer interaction, the mobile phone screen has become an important factor affecting the user experience and the overall performance of the terminal, and the demand for detecting screen defects is growing accordingly. To address the low detection accuracy, high missed-detection rate for small target defects, and slow detection speed of existing mobile phone screen defect detection methods, a PGS-YOLO algorithm is proposed with YOLOv8n as the baseline model. PGS-YOLO effectively improves the detection ability for small targets by adding a dedicated small target detection head combined with the SeaAttention attention module. The backbone and feature fusion networks incorporate the lightweight PConv and GhostNetV2 modules, respectively, to maintain accuracy, reduce the number of model parameters, and improve the speed and efficiency of defect detection. The experimental results show that, on the Peking University mobile phone screen surface defect dataset, the mAP@0.5 and mAP@0.5∶0.95 of the PGS-YOLO algorithm are 2.5 and 2.2 percentage points higher, respectively, than those of YOLOv8n. The algorithm can accurately detect large defects on mobile phone screens while maintaining a certain degree of accuracy for small defects. In addition, its detection performance is better than that of most YOLO series algorithms, such as YOLOv5n and YOLOv8s. Meanwhile, the number of parameters is only 2.0×10⁶, which is smaller than that of YOLOv8n, meeting the needs of industrial mobile phone screen defect detection scenarios.

  • Development Research and Engineering Application
    CHEN Ziyan, WANG Xiaolong, HE Di, AN Guocheng
    Computer Engineering. 2025, 51(5): 314-325. https://doi.org/10.19678/j.issn.1000-3428.0069122

    Current high-precision vehicle detection models face challenges owing to their excessive parameterization and computational demands, making them unsuitable for efficient operation on intelligent transportation devices. Conversely, lightweight vehicle detection models often sacrifice accuracy, rendering them unsuitable for practical tasks. In response, an improved lightweight vehicle detection network based on YOLOv8 is proposed. This enhancement involves substituting the main network with the FasterNet architecture, which reduces the computational and memory access requirements. Additionally, we replace the feature fusion structure in the neck with a weighted Bidirectional Feature Pyramid Network (BiFPN) to simplify the feature fusion process. Simultaneously, we introduce a dynamic detection head with a fused attention mechanism to achieve nonredundant integration of the detection head and attention. Furthermore, we address the deficiencies of the Complete Intersection over Union (CIoU) loss in terms of detection accuracy and convergence speed by proposing a regression loss that combines the Scale-invariant Intersection over Union (SIoU) with the Normalized Gaussian Wasserstein Distance (NWD). Finally, to minimize the computational demands on edge devices, we implement amplitude-based layer-wise adaptive sparsity pruning, which further compresses the model size. Experimental results demonstrate that, compared with the original YOLOv8s model, the improved model achieves an accuracy increase of 1.5 percentage points, a 78.9% reduction in parameter count, a 67.4% decrease in computational demands, and a 77.8% reduction in model size. This demonstrates the outstanding lightweight effectiveness and practical utility of the proposed model.
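    The NWD term in the combined regression loss above models each bounding box as a 2D Gaussian and turns the Wasserstein distance between the Gaussians into a bounded similarity. A minimal sketch, assuming the standard closed form and a normalizing constant `c` that is in practice dataset-dependent (the paper's exact formulation is not given in the abstract):

```python
import math

def nwd(box_a, box_b, c=1.0):
    """Normalized Gaussian Wasserstein Distance between two boxes.

    Boxes are (cx, cy, w, h). Each box is modeled as a 2D Gaussian
    N([cx, cy], diag((w/2)^2, (h/2)^2)); the squared 2-Wasserstein
    distance between two such Gaussians has the closed form below.
    c is a normalizing constant (assumed 1.0 here for illustration).
    """
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    w2 = (ax - bx) ** 2 + (ay - by) ** 2 \
         + (aw / 2 - bw / 2) ** 2 + (ah / 2 - bh / 2) ** 2
    return math.exp(-math.sqrt(w2) / c)

# Identical boxes give the maximum similarity of 1.0.
print(nwd((0, 0, 4, 4), (0, 0, 4, 4)))  # 1.0
```

    Unlike IoU-family measures, this similarity stays informative even when small boxes do not overlap at all, which is why it is often paired with an IoU-based term.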

  • Space-Air-Ground Integrated Computing Power Networks
    WANG Kewen, ZHANG Weiting, SUN Tong
    Computer Engineering. 2025, 51(5): 52-61. https://doi.org/10.19678/j.issn.1000-3428.0069471

    In response to the increasing demand for fast response and large-scale coverage in application scenarios such as satellite data processing and vehicle remote control, this study utilizes hierarchical control and artificial intelligence technology to design a resource scheduling mechanism for space-air-ground integrated computing power networks. The air, space, and ground networks are divided into three domains, and domain controllers are deployed for resource management in the corresponding local domains. The areas are divided based on the coverage of satellites and drones to ensure effective service guarantees, efficient data transmission, and task processing. A multi-agent reinforcement learning-based scheduling algorithm is proposed to optimize resource utilization in space-air-ground integrated computing power networks, in which each domain controller is treated as an agent with task scheduling and resource allocation capabilities. Intelligent resource scheduling and efficient resource allocation for computing tasks are realized through collaborative learning and distributed decision-making under delay and energy consumption constraints. Computing tasks are generated in different scenarios and processed in real time. Simulation results show that the proposed mechanism can effectively improve resource utilization and shorten task response time.

  • 40th Anniversary Celebration of Shanghai Computer Society
    QI Fenglin, SHEN Jiajie, WANG Maoyi, ZHANG Kai, WANG Xin
    Computer Engineering. 2025, 51(4): 1-14. https://doi.org/10.19678/j.issn.1000-3428.0070222

    The rapid development of Artificial Intelligence (AI) has empowered numerous fields and significantly impacted society, establishing a solid technological foundation for university informatization services. This study explores the historical development of both AI and university informatization by analyzing their respective trajectories and interconnections. Although universities worldwide may focus on different aspects of AI in their digital transformation efforts, they universally demonstrate the vast potential of AI in enhancing education quality and streamlining management processes. Thus, this study focuses on five core areas: teaching, learning, administration, assessment, and examination. It comprehensively summarizes typical AI-empowered application cases to demonstrate how AI effectively improves educational quality and management efficiency. In addition, this study highlights the potential challenges associated with AI applications in university informatization, such as data privacy protection, algorithmic bias, and technology dependence. Furthermore, common strategies for addressing these issues, such as enhancing data security, optimizing algorithm transparency and fairness, and fostering digital literacy among both teachers and students, are elaborated upon in this study. Based on these analyses, the study explores future research directions for AI in university informatization, emphasizing the balance between technological innovation and ethical standards. It advocates for the establishment of interdisciplinary collaboration mechanisms to promote the healthy and sustainable development of AI in the field of university informatization.

  • Artificial Intelligence and Pattern Recognition
    ZHOU Hanqi, FANG Dongxu, ZHANG Ningbo, SUN Wensheng
    Computer Engineering. 2025, 51(4): 57-65. https://doi.org/10.19678/j.issn.1000-3428.0069100

    Unmanned Aerial Vehicle (UAV) Multi-Object Tracking (MOT) technology is widely used in various fields such as traffic operation, safety monitoring, and water area inspection. However, existing MOT algorithms are primarily designed for single-UAV MOT scenarios. The perspective of a single UAV typically has certain limitations, which can lead to tracking failures when objects are occluded, thereby causing ID switching. To address this issue, this paper proposes a Multi-UAV Multi-Object Tracking (MUMTTrack) algorithm. The MUMTTrack network adopts an MOT paradigm based on Tracking By Detection (TBD), utilizing multiple UAVs to track objects simultaneously and compensating for the perspective limitations of a single UAV. Additionally, to effectively integrate the tracking results from multiple UAVs, an ID assignment strategy and an image matching strategy are designed for MUMTTrack based on the Speeded Up Robust Feature (SURF) algorithm. Finally, the performance of MUMTTrack is compared with that of existing widely used single-UAV MOT algorithms on the MDMT dataset. According to the comparative analysis, MUMTTrack demonstrates significant advantages in terms of MOT performance metrics, such as the Identity F1 (IDF1) value and Multi-Object Tracking Accuracy (MOTA).

  • Development Research and Engineering Application
    TANG Jingwen, LAI Huicheng, WANG Tongguan
    Computer Engineering. 2025, 51(4): 303-313. https://doi.org/10.19678/j.issn.1000-3428.0068897

    Pedestrian detection in intelligent community scenarios needs to accurately recognize pedestrians to address various situations. However, for persons who are occluded or at long distances, existing detectors exhibit problems such as missed detection, detection error, and large models. To address these problems, this paper proposes a pedestrian detection algorithm, Multiscale Efficient-YOLO (ME-YOLO), based on YOLOv8. An efficient feature Extraction Module (EM) is designed to improve network learning and capture pedestrian features, which reduces the number of network parameters and improves detection accuracy. The reconstructed detection head module reintegrates the detection layer to enhance the network's ability to recognize small targets and effectively detect small target pedestrians. A Bidirectional Feature Pyramid Network (BiFPN) is introduced to design a new neck network, namely the Bidirectional Dilated Residual-Feature Pyramid Network (BDR-FPN), and the expanded residual module and weighted attention mechanism expand the receptive field and learn pedestrian features with emphasis, thereby alleviating the problem of network insensitivity to occluded pedestrians. Compared with the original YOLOv8 algorithm, ME-YOLO increases the AP50 by 5.6 percentage points, reduces the number of model parameters by 41%, and compresses the model size by 40% after training and verification based on the CityPersons dataset. ME-YOLO also increases the AP50 by 4.1 percentage points and AP50∶95 by 1.7 percentage points on the TinyPerson dataset. Moreover, the algorithm significantly reduces the number of model parameters and model size and effectively improves detection accuracy. This method has a considerable application value in intelligent community scenarios.

  • Development Research and Engineering Application
    ZHANG Boqiang, CHEN Xinming, FENG Tianpei, WU Lan, LIU Ningning, SUN Peng
    Computer Engineering. 2025, 51(4): 373-382. https://doi.org/10.19678/j.issn.1000-3428.0068338

    This paper proposes a path-planning method based on the fusion of hybrid A* and a modified Reeds-Shepp (RS) curve to address the issue of unmanned transfer vehicles in limited scenarios being unable to maintain a safe distance from surrounding obstacles during path planning, resulting in collisions between vehicles and obstacles. First, a distance cost function based on the KD-Tree algorithm is proposed and added to the cost function of the hybrid A* algorithm. Second, the expansion strategy of the hybrid A* algorithm is changed by dynamically adjusting the node expansion distance based on the surrounding environment of the vehicle, achieving dynamic node expansion and improving the algorithm's node search efficiency. Finally, the RS curve generation mechanism of the hybrid A* algorithm is improved to make the straight part of the generated RS curve parallel to the boundary of the surrounding obstacles, meeting the requirements of road driving in the plant area. Subsequently, the local path is smoothed to ensure curvature continuity under vehicle kinematics constraints and thereby improve the quality of the generated path. The experimental results show that, compared with traditional algorithms, the proposed algorithm reduces the search time by 38.06%, reduces the maximum curvature by 25.2%, and increases the closest distance from the path to the obstacle by 51.3%. Thus, the proposed method effectively improves the quality of path generation of the hybrid A* algorithm and can operate well in limited scenarios.
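    The distance cost idea above can be sketched as follows. The linear scan is a dependency-free stand-in for a real KD-tree query, and the inverse-distance penalty inside a safety radius is an assumed form, since the abstract does not state the exact cost function:

```python
import math

def nearest_obstacle_dist(point, obstacles):
    """Distance from a node to the closest obstacle.

    A KD-tree (e.g. scipy.spatial.cKDTree) would answer this query in
    O(log n); a linear scan keeps the sketch dependency-free.
    """
    px, py = point
    return min(math.hypot(px - ox, py - oy) for ox, oy in obstacles)

def node_cost(g_cost, h_cost, point, obstacles, w=1.0, d_safe=2.0):
    """Hybrid A* node cost with an added obstacle-distance penalty.

    Nodes closer than d_safe to an obstacle pay an inverse-distance
    penalty (a hypothetical form chosen for illustration), steering the
    search toward paths that keep a safe clearance.
    """
    d = nearest_obstacle_dist(point, obstacles)
    penalty = w * (1.0 / d - 1.0 / d_safe) if d < d_safe else 0.0
    return g_cost + h_cost + penalty

obs = [(5, 5), (2, 8)]
# A node far from all obstacles pays no penalty.
print(node_cost(1.0, 2.0, (0, 0), obs))  # 3.0
```

    A node half a meter from an obstacle, by contrast, pays a positive penalty, so the planner prefers an equally short path with more clearance.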

  • Artificial Intelligence and Pattern Recognition
    SUN Ziwen, QIAN Lizhi, YUAN Guanglin, YANG Chuandong, LING Chong
    Computer Engineering. 2025, 51(4): 158-168. https://doi.org/10.19678/j.issn.1000-3428.0068892

    Transformer-based object tracking methods are widely used in the field of computer vision and have achieved excellent results. However, object deformation, occlusion, illumination changes, and rapid object motion can change object information during actual tracking tasks, and the underutilization of object template change information in existing methods prevents the tracking performance from improving. To solve this problem, this paper presents a Transformer object tracking method, TransTRDT, based on real-time dynamic template update. A dynamic template updating branch is attached to reflect the latest appearance and motion state of an object. The branch determines whether the template is updated through a template quality scoring head; when it identifies the possibility of an update, it passes the initial template, the dynamic template of the previous frame, and the latest cropped prediction into the dynamic template updating network to update the dynamic template. As a result, the object can be tracked more accurately by obtaining a more reliable template. The tracking performance of TransTRDT on GOT-10k, LaSOT, and TrackingNet is superior to that of algorithms such as SwinTrack and STARK. It achieves a tracking success rate of 71.9% on the OTB100 dataset, with a tracking speed of 36.82 frames per second, reaching the current leading level in the field.

  • Research Hotspots and Reviews
    CI Tianzhao, YANG Hao, ZHOU You, XIE Changsheng, WU Fei
    Computer Engineering. 2025, 51(3): 1-23. https://doi.org/10.19678/j.issn.1000-3428.0068673

    Smartphones have become an integral part of modern daily life. The Android operating system currently holds the largest market share in the mobile operating system market owing to its open-source nature and comprehensive ecosystem. Within Android smartphones, the storage subsystem plays a pivotal role, exerting a significant influence on the user experience. However, the design of Android mobile storage systems diverges from server scenarios, necessitating the consideration of distinct factors, such as resource constraints, cost sensitivity, and foreground application prioritization. Extensive research has been conducted in this area. By summarizing and analyzing the current research status in this field, we categorize the issues experienced by users of Android smartphone storage systems into five categories: host-side write amplification, memory swapping, file system fragmentation, flash device performance, and I/O priority inversion. Subsequently, existing works addressing these five categories of issues are classified, along with commonly used tools for testing and analyzing mobile storage systems. Finally, we conclude by examining existing techniques that ensure the user experience with Android smartphone storage systems and discuss potential avenues for future investigation.

  • Research Hotspots and Reviews
    JIANG Qiqi, ZHANG Liang, PENG Lingqi, KAN Haibin
    Computer Engineering. 2025, 51(3): 24-33. https://doi.org/10.19678/j.issn.1000-3428.0069378

    With the advent of the big data era, the proliferation of information types has increased the requirements for controlled data sharing. Decentralized Attribute-Based Encryption (DABE) has been widely studied in this context to enable fine-grained access control among multiple participants. However, the Internet of Things (IoT) data sharing scenario has become mainstream and requires more data features, such as cross-domain access, transparency, trustworthiness, and controllability, whereas traditional Attribute-Based Encryption (ABE) schemes pose a computational burden on resource-constrained IoT devices. To solve these problems, this study proposes an accountable and verifiable outsourced hierarchical attribute-based encryption scheme based on blockchain to support cross-domain data access and improve the transparency and trustworthiness of data sharing using blockchain. By introducing the concept of Verifiable Credential (VC), this scheme addresses the issue of user identity authentication and distributes the burden of complex encryption and decryption processes to outsourced computing nodes. Finally, using a hierarchical structure, fine-grained data access control is achieved. A security analysis demonstrates that the proposed scheme can withstand chosen-plaintext attacks. Simulation results on small IoT devices with limited resources using Docker show that the proposed scheme has a lower computational overhead than existing schemes. For up to 30 attributes, the computation cost does not exceed 2.5 s for any of the algorithms, and the average cost is approximately 1 s, making the scheme suitable for resource-constrained IoT devices.

  • Artificial Intelligence and Pattern Recognition
    DAI Kangjia, XU Huiying, ZHU Xinzhong, LI Xiyu, HUANG Xiao, CHEN Guoqiang, ZHANG Zhixiong
    Computer Engineering. 2025, 51(3): 95-104. https://doi.org/10.19678/j.issn.1000-3428.0068950

    Traditional visual Simultaneous Localization And Mapping (SLAM) systems are based on the assumption of a static environment. However, real scenes often contain dynamic objects, which may lead to decreased accuracy, deteriorated robustness, and even tracking loss in SLAM pose estimation and map construction. To address these issues, this study proposes a new semantic SLAM system, named YGL-SLAM, based on ORB-SLAM2. The system first uses a lightweight target detection algorithm, YOLOv8n, to track dynamic objects and obtain their semantic information. Subsequently, both point and line features are extracted in the tracking thread, and the dynamic features are culled based on the acquired semantic information using the Z-score and epipolar geometry algorithms to improve the performance of SLAM in dynamic scenes. Given that lightweight target detection algorithms suffer from missed detections in consecutive frames when tracking dynamic objects, this study designs a detection compensation method based on neighboring frames. Testing on the public datasets TUM and Bonn reveals that the YGL-SLAM system improves performance by over 90% compared to ORB-SLAM2, while demonstrating superior accuracy and robustness compared to other dynamic SLAM systems.

  • Graphics and Image Processing
    ZHAO Hong, SONG Furong, LI Wengai
    Computer Engineering. 2025, 51(2): 300-311. https://doi.org/10.19678/j.issn.1000-3428.0068481

    Adversarial examples are crucial for evaluating the robustness of Deep Neural Networks (DNNs) and revealing their potential security risks. The adversarial example generation method based on a Generative Adversarial Network (GAN), AdvGAN, has made significant progress in generating image adversarial examples; however, the sparsity and amplitude of the perturbation generated by this method are insufficient, resulting in lower authenticity of adversarial examples. To address this issue, this study proposes an improved image adversarial example generation method based on AdvGAN, Squeeze-and-Excitation (SE)-AdvGAN. SE-AdvGAN improves the sparsity of the perturbation by constructing an SE attention generator and an SE residual discriminator. The SE attention generator is used to extract the key features of an image and limit the position of perturbation generation. The SE residual discriminator guides the generator to avoid generating irrelevant perturbation. Moreover, a boundary loss based on the l2 norm is added to the loss function of the SE attention generator to limit the amplitude of the perturbation, thereby improving the authenticity of adversarial examples. The experimental results indicate that, in the white box attack scenario, the SE-AdvGAN method achieves higher sparsity and smaller amplitude of adversarial example perturbation compared to existing methods and achieves better attack performance on different target models. This indicates that the high-quality adversarial examples generated by SE-AdvGAN can more effectively evaluate the robustness of DNNs.
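    An l2-norm boundary loss of the kind described above can be sketched as a hinge on the perturbation norm; the exact formulation in SE-AdvGAN is not given in the abstract, so this hinge form is an assumption for illustration:

```python
import math

def boundary_loss(perturbation, bound):
    """l2-norm boundary loss: zero while the perturbation amplitude
    stays within the bound, growing linearly once it exceeds it.

    A hinge on the l2 norm is a common way to cap perturbation
    amplitude; SE-AdvGAN's exact loss is an assumption here.
    """
    l2 = math.sqrt(sum(p * p for p in perturbation))
    return max(0.0, l2 - bound)

print(boundary_loss([0.3, 0.4], 1.0))  # 0.0 (norm 0.5, within the bound)
print(boundary_loss([3.0, 4.0], 1.0))  # 4.0 (norm 5.0 exceeds the bound by 4)
```

    Added to the generator objective, this term pushes the generator toward perturbations whose amplitude stays below the bound, which is what keeps the adversarial examples visually close to the originals.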

  • Computer Architecture and Software Technology
    ZHANG Ming, GUO Wenkang, WANG Haifeng
    Computer Engineering. 2025, 51(3): 197-207. https://doi.org/10.19678/j.issn.1000-3428.0068477

    Graphics Processing Units (GPUs) are not fully utilized when processing large-scale dynamic graphs, and the limitations of GPU-oriented graph partitioning methods lead to performance bottlenecks. To improve the performance of graph computing, a Central Processing Unit (CPU)/GPU Distributed Heterogeneous Engine (DH-Engine) is proposed to exploit heterogeneous processors. First, a new heterogeneous graph partitioning algorithm is proposed. It uses a streaming graph partitioning algorithm as its core to achieve dynamic load balancing between computing nodes and between the CPU and GPU. The greedy strategy assigns vertices based on the maximum number of neighboring vertices during the initial graph partitioning and dynamically adjusts vertex positions based on the minimum number of connected edges during iteration. Second, the system introduces a GPU heterogeneous computing model to improve graph computing efficiency through functional parallelism. The experiments used PageRank, Connected Components (CC), Single-Source Shortest Path (SSSP), and k-core as examples to conduct comparisons with other graph computing systems. Compared with other graph engines, DH-Engine better balances the computing load across nodes and between heterogeneous processors, shortening delay and accelerating overall computing speed. The results show that the CPU/GPU load ratio of this system tends to 1, and the heterogeneous computing achieves a speedup of up to 5 times over other graph computing systems. DH-Engine thus provides an improved heterogeneous graph computing scheme.
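    The greedy streaming assignment described above can be sketched as follows; the scoring (most already-placed neighbors, ties broken by the lighter partition) follows the abstract's "maximum neighboring vertices" rule, while the dynamic re-adjustment step during iteration is omitted:

```python
def stream_partition(edges, vertices, k):
    """Greedy streaming graph partitioning.

    Each arriving vertex is assigned to the partition holding the most
    of its already-placed neighbors, with ties broken by the partition
    with the smaller load. This is a minimal sketch of the initial
    placement only; DH-Engine's later edge-based re-adjustment is not
    modeled here.
    """
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    part = {}           # vertex -> partition id
    load = [0] * k      # vertices per partition
    for v in vertices:  # stream order = the given vertex order
        scores = [sum(1 for n in adj[v] if part.get(n) == p)
                  for p in range(k)]
        best = max(range(k), key=lambda p: (scores[p], -load[p]))
        part[v] = best
        load[best] += 1
    return part

edges = [(0, 1), (1, 2), (3, 4)]
p = stream_partition(edges, [0, 1, 2, 3, 4], 2)
# Connected vertices tend to land in the same partition.
print(p[0] == p[1] == p[2], p[3] == p[4])  # True True
```

    On this toy graph the two connected components end up in different partitions, cutting zero edges while keeping the loads within one vertex of each other.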

  • Graphics and Image Processing
    LIU Shengjie, HE Ning, WANG Xin, YU Haigang, HAN Wenjing
    Computer Engineering. 2025, 51(2): 278-288. https://doi.org/10.19678/j.issn.1000-3428.0068375

    Human pose estimation is widely used in multiple fields, including sports fitness, gesture control, unmanned supermarkets, and entertainment games. However, pose-estimation tasks face several challenges. Considering that current mainstream human pose-estimation networks have large parameter counts and complex calculations, LitePose, a lightweight pose-estimation network based on a high-resolution network, is proposed. First, Ghost convolution is used to reduce the parameters of the feature extraction network. Second, the Decoupled Fully Connected (DFC) attention module is used to better capture long-range spatial dependencies between pixels and reduce the feature extraction loss caused by the decrease in parameters. The accuracy of human pose keypoint regression is improved, and a feature enhancement module is designed to further enhance the features extracted by the backbone network. Finally, a new coordinate decoding method is designed to reduce the error in the heatmap decoding process and improve the accuracy of keypoint regression. LitePose is validated on the human keypoint detection datasets COCO and MPII and compared with current mainstream methods. The experimental results show that LitePose loses only 0.2% accuracy compared to the baseline network HRNet, while its number of parameters is less than one-third of the baseline's. LitePose can significantly reduce the number of parameters in the network model while ensuring minimal accuracy loss.
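    Heatmap coordinate decoding of the kind mentioned above is commonly done as an argmax plus a sub-pixel shift toward the stronger neighbor; LitePose's exact decoder is not given in the abstract, so the classic quarter-pixel refinement is used here as an illustrative stand-in:

```python
def decode_heatmap(hm):
    """Decode a single keypoint heatmap (list of rows).

    Takes the argmax cell, then shifts a quarter pixel toward the
    larger of the two neighbors on each axis - the classic refinement
    that reduces quantization error in heatmap decoding. Returns
    (x, y) in heatmap coordinates.
    """
    h, w = len(hm), len(hm[0])
    y, x = max(((r, c) for r in range(h) for c in range(w)),
               key=lambda rc: hm[rc[0]][rc[1]])
    fx, fy = float(x), float(y)
    if 0 < x < w - 1:  # horizontal quarter-pixel shift
        fx += 0.25 if hm[y][x + 1] > hm[y][x - 1] else -0.25
    if 0 < y < h - 1:  # vertical quarter-pixel shift
        fy += 0.25 if hm[y + 1][x] > hm[y - 1][x] else -0.25
    return fx, fy

hm = [[0.0, 0.1, 0.0],
      [0.2, 1.0, 0.6],
      [0.0, 0.3, 0.0]]
print(decode_heatmap(hm))  # (1.25, 1.25): shifted toward the larger neighbors
```

    The shift matters because heatmaps are produced at a fraction of the input resolution, so a raw argmax alone quantizes the keypoint to the heatmap grid.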

  • Artificial Intelligence and Pattern Recognition
    SUN Haomiao, LI Zongmin, XIAO Qian, SUN Wenjie, ZHANG Wenxin
    Computer Engineering. 2025, 51(2): 102-110. https://doi.org/10.19678/j.issn.1000-3428.0069106

    In response to the need for intelligent curling training, a new on-site curling decision-making method that combines computer vision and deep Reinforcement Learning (RL) technologies, Artificial Intelligence (AI)-Curling, is proposed. AI-Curling comprises two components: SR-Yolo for curling detection and Global Strategy Perception-Monte Carlo Tree Search (GSP-MCTS) for strategy generation. The former is responsible for sensing the state of the curling stones at critical moments and extracting information on the location and type of stones in real scenes. To improve the detection accuracy of small targets in large scenes and prevent feature loss due to inappropriate downsampling, a Shallow Refinement Backbone Network (SRNet) is introduced to capture richer feature information by adding layers in the initial stages of the network. An Adaptive Feature Optimization Fusion (AFOF) module is further introduced into the multiscale fusion network to increase the number of effective samples in each layer, thereby preventing small-scale targets from being submerged in complex backgrounds and noise. In the strategy generation module, curling match decision analysis is implemented using a combination of the MCTS algorithm and a policy value network. A GSP module is embedded into the policy value network to enhance spatial perception, introducing a kernel function to handle action space continuity and execution uncertainty. In the experiments, SR-Yolo achieved 0.974 mAP@0.5 on the standard Curling dataset and 0.723 mAP@0.5 on the more complex obstructed Curling_hard dataset. In addition, GSP-MCTS achieved a 62% win rate against the latest real-scene curling model, Curling MCTS, indicating its superior performance.

  • Research Hotspots and Reviews
    YUAN Yajian, MAO Li
    Computer Engineering. 2025, 51(3): 54-63. https://doi.org/10.19678/j.issn.1000-3428.0069042

    Traffic sign detection is crucial for assisted driving and plays a vital role in ensuring driving safety. However, in real-world traffic environments, factors such as darkness and rain create background noise that complicates the detection process. In addition, existing models often struggle to effectively detect small traffic signs from a distance. Furthermore, when a traffic sign detection model is designed, the model size must be considered for practical deployment. To address these challenges, this study proposes a lightweight traffic sign detection model based on YOLOv8 with enhanced foregrounds. First, a lightweight PC2f module is designed to replace part of the C2f modules in the original Backbone. This modification reduces the number of parameters and the computational load, enriches the gradient flow, retains more shallow information, and ultimately enhances detection performance while maintaining a lightweight design. Next, the study designs a Foreground Enhancement Module (FEM) and incorporates it at the Neck position to effectively amplify the foreground information and reduce background noise. Finally, the study adds a small-target detection layer to extract shallow features from high-resolution images, thereby improving the ability of the model to detect small-target traffic signs. Experimental results show that the optimized model achieves a mAP50 of 82.5% and 95.3% on the CCTSDB 2021 and GTSDB datasets, improvements of 3.6 and 1.0 percentage points over the original model, respectively, while reducing the number of model weights by 0.22×10⁶. These results confirm the effectiveness of the proposed model for practical applications.

  • Research Hotspots and Reviews
    MAO Jingzheng, HU Xiaorui, XU Gengchen, WU Guodong, SUN Yanbin, TIAN Zhihong
    Computer Engineering. 2025, 51(2): 1-17. https://doi.org/10.19678/j.issn.1000-3428.0068374

    Industrial Control Systems (ICSs) that utilize Digital Twin (DT) technology play a critical role in enhancing system security, ensuring stable operations, and optimizing production efficiency. The application of DT technology in the field of industrial control security primarily focuses on two key areas: security situation awareness and industrial cyber ranges. DT-based security situation awareness facilitates real-time monitoring, anomaly detection, vulnerability analyses, and threat identification while enabling a visualized approach to managing system security. Similarly, industrial cyber ranges powered by DT technology act as strategy validation platforms, supporting attack-defense simulations for ICSs, assessing the effectiveness of security strategies, enhancing the protection of critical infrastructure, and providing robust training support for personnel. This study analyzes the current security landscape of ICSs and advancements in applying DT technology to enhance ICS security situation awareness, with particular emphasis on the technology's contributions to risk assessment. Furthermore, the study explores the optimization capabilities of DT-based industrial cyber ranges for bolstering ICS security. Through a case study of intelligent power grids, this study validates the critical role of DT technology in ICS security. Finally, the study discusses future directions for the development of DT technology within the ICS security domain.

  • Research Hotspots and Reviews
    MA Hengzhi, QIAN Yurong, LENG Hongyong, WU Haipeng, TAO Wenbin, ZHANG Yiyang
    Computer Engineering. 2025, 51(2): 18-34. https://doi.org/10.19678/j.issn.1000-3428.0068386

    With the continuous development of big data and artificial intelligence technologies, knowledge graph embedding is developing rapidly, and knowledge graph applications are becoming increasingly widespread. Knowledge graph embedding improves the efficiency of knowledge representation and reasoning by representing structured knowledge in a low-dimensional vector space. This study provides a comprehensive overview of knowledge graph embedding technology, including its basic concepts, model categories, evaluation indices, and application prospects. First, the basic concepts and background of knowledge graph embedding are introduced, classifying the technology into four main categories: embedding models based on translation mechanisms, semantic-matching mechanisms, neural networks, and additional information. The core ideas, scoring functions, advantages and disadvantages, and application scenarios of the related models are meticulously sorted. Second, common datasets and evaluation indices of knowledge graph embedding are summarized, along with application prospects, such as link prediction and triple classification. The experimental results are analyzed, and downstream tasks, such as question-and-answer systems and recommender systems, are introduced. Finally, the knowledge graph embedding technology is reviewed and summarized, outlining its limitations and the primary existing problems while discussing the opportunities and challenges for future knowledge graph embedding along with potential research directions.

  • Artificial Intelligence and Pattern Recognition
    ZHANG Guosheng, LI Caihong, ZHANG Yaoyu, ZHOU Ruihong, LIANG Zhenying
    Computer Engineering. 2025, 51(1): 88-97. https://doi.org/10.19678/j.issn.1000-3428.0068738

    This study proposes an improved Artificial Potential Field (APF) algorithm (called FC-V-APF) based on Fuzzy Control (FC) and a virtual target point method to solve the local minimum trap and path redundancy issues of the APF method in robot local path planning. First, a virtual target point obstacle avoidance strategy is designed, and the V-APF algorithm is constructed to help the robot overcome local minimum traps by adding an obstacle crossing mechanism and a target point update threshold. Second, a control strategy based on the cumulative angle sum is proposed to assist the robot in exiting multi-U complex obstacle areas. Subsequently, the V-APF and FC algorithms are combined to construct the FC-V-APF algorithm. The surrounding environment is evaluated using real-time radar sensor data and a designed weight function, and a fuzzy controller is selected to output an auxiliary force that avoids obstacles in advance. Finally, a simulation environment is built on the Robot Operating System (ROS) platform to compare the path planning performance of the FC-V-APF algorithm with that of other algorithms. In terms of path length, running time, and speed curves, the designed FC-V-APF algorithm can quickly escape traps, reduce redundant paths, improve path smoothness, and reduce planning time.
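    The base field that V-APF builds on can be sketched as follows: linear attraction to the goal plus inverse-square repulsion inside an influence radius. The virtual-target mechanism in the paper temporarily swaps the goal for a virtual point to escape local minima; only the standard field, with assumed gain and radius values, is shown here:

```python
import math

def apf_force(robot, goal, obstacles, k_att=1.0, k_rep=1.0, d0=2.0):
    """Resultant artificial-potential-field force on the robot.

    Standard APF formulation: attraction proportional to the vector
    toward the goal, plus repulsion from each obstacle closer than the
    influence radius d0. Gains k_att, k_rep and radius d0 are assumed
    illustrative values.
    """
    rx, ry = robot
    gx, gy = goal
    fx, fy = k_att * (gx - rx), k_att * (gy - ry)  # attraction term
    for ox, oy in obstacles:
        dx, dy = rx - ox, ry - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:  # repulsion only inside the influence radius
            mag = k_rep * (1.0 / d - 1.0 / d0) / d ** 2
            fx += mag * dx / d
            fy += mag * dy / d
    return fx, fy

# With no obstacle in range, the force points straight at the goal.
print(apf_force((0, 0), (3, 4), [(10, 10)]))  # (3.0, 4.0)
```

    A local minimum trap occurs exactly when the attraction and repulsion terms cancel before the goal is reached; replacing `goal` with a virtual target point perturbs this balance so the robot can move again.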

  • Graphics and Image Processing
    ZHAO Nannan, GAO Feichen
    Computer Engineering. 2025, 51(1): 198-207. https://doi.org/10.19678/j.issn.1000-3428.0068677

    An instance segmentation algorithm (DE-YOLO) based on the improved YOLOv8 is proposed. To decrease the effect of complex backgrounds in the images, efficient multiscale attention is introduced, and cross-dimensional interaction ensures an even spatial feature distribution within each feature group. In the backbone network, a deformable convolution using DCNv2 is combined with a C2f convolutional layer to overcome the limitations of traditional convolutions and increase flexibility. This is performed to reduce harmful gradient effects and improve the overall accuracy of the detector. The dynamic nonmonotonic Wise-Intersection-over-Union (WIoU) focusing mechanism is employed instead of the traditional Complete Intersection-over-Union (CIoU) loss function to evaluate the quality, optimize detection frame positioning, and improve segmentation accuracy. Meanwhile, Mixup data enhancement processing is enabled to enrich the training features of the dataset and improve the learning ability of the model. The experimental results demonstrate that DE-YOLO improves the mean Average Precision of mask(mAPmask) and mAPmask@0.5 by 2.0 and 3.2 percentage points compared with the benchmark model YOLOv8n-seg in the Cityscapes dataset of urban landscapes, respectively. Furthermore, DE-YOLO maintains an excellent detection speed and small parameter quantity while exhibiting improved accuracy, with the model requiring 2.2-31.3 percentage points fewer parameters than similar models.

  • Artificial Intelligence and Pattern Recognition
    SONG Yinghua, XU Yaan, ZHANG Yuanjin
    Computer Engineering. 2025, 51(1): 51-59. https://doi.org/10.19678/j.issn.1000-3428.0068372

    Air pollution is one of the primary challenges in urban environmental governance, with PM2.5 being a significant contributor affecting air quality. As traditional time-series prediction models for PM2.5 often lack seasonal factor analysis and sufficient prediction accuracy, a machine learning-based fusion model, Seasonal Autoregressive Integrated Moving Average (SARIMA)-Support Vector Machine (SVM), is proposed in this paper. The fusion model is a tandem model that splits the data into linear and nonlinear parts. Building on the Autoregressive Integrated Moving Average (ARIMA) model, the SARIMA model adds seasonal factor extraction parameters to effectively analyze and predict the future linear seasonal trend of PM2.5 data. Combined with the SVM model, the sliding step size prediction method is used to determine the optimal prediction step size for the residual series, thereby optimizing the residual sequence of the predicted data. The optimal model parameters are further determined through grid search, enabling long-term prediction of PM2.5 data and improving overall prediction accuracy. The analysis of PM2.5 monitoring data in Wuhan over the past five years shows that the prediction accuracy of the fusion model is significantly higher than that of the single models. In the same experimental environment, the accuracy of the fusion model is improved by 99%, 99%, and 98% compared with those of the ARIMA, Auto ARIMA, and SARIMA models, respectively, and the stability of the model is also better, thus providing a new direction for PM2.5 prediction.
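
    The tandem decomposition that the SARIMA-SVM fusion relies on, in which a linear model predicts the trend and a second model corrects its residuals, can be illustrated with a toy sketch. This is not the paper's model: the linear part is an ordinary least-squares trend rather than SARIMA, and the residual corrector is a per-season residual mean rather than an SVM; all names and the season length are illustrative.

```python
def fit_linear_trend(y):
    """Least-squares line y ~ a*t + b over t = 0..n-1 (closed form)."""
    n = len(y)
    t_mean = (n - 1) / 2.0
    y_mean = sum(y) / n
    num = sum((t - t_mean) * (v - y_mean) for t, v in enumerate(y))
    den = sum((t - t_mean) ** 2 for t in range(n))
    a = num / den
    b = y_mean - a * t_mean
    return a, b

def tandem_forecast(y, season=4, steps=1):
    """Tandem decomposition: a linear component forecasts the trend,
    and a second component forecasts the residual series (here, the
    mean residual of the matching seasonal position); the final
    forecast is the sum of the two components."""
    a, b = fit_linear_trend(y)
    residuals = [v - (a * t + b) for t, v in enumerate(y)]
    out = []
    n = len(y)
    for h in range(steps):
        t = n + h
        seasonal = [r for i, r in enumerate(residuals) if i % season == t % season]
        out.append(a * t + b + sum(seasonal) / len(seasonal))
    return out
```

    The design point is the series (tandem) coupling: the second model never sees the raw series, only what the linear model failed to explain.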

  • Development Research and Engineering Application
    LI Mengkun, YUAN Chen, WANG Qi, ZHAO Chong, CHEN Jingxuan, LIU Lifeng
    Computer Engineering. 2025, 51(1): 287-294. https://doi.org/10.19678/j.issn.1000-3428.0068656

    Target detection technology is advancing, but recognizing online listening behavior remains a challenge. Inaccurate identification of online classroom conduct and high model computation, owing to limited human supervision and complex target detection models, pose problems. To address this, we employ an upgraded YOLOv8-based method to detect and identify online listening behaviors. First, this approach incorporates a Bidirectional Feature Pyramid Network (BiFPN) for feature fusion based on YOLOv8n, thereby enhancing feature extraction and model recognition accuracy. Second, the C3Ghost module is selected over the C2f module on the Head side to significantly reduce the computational burden. The study demonstrates that the YOLOv8n-BiFPN-C3Ghost model achieves an mAP@0.5 score of 98.6% and an mAP@0.5∶0.95 score of 92.6% on an online listening behavior dataset. The proposed model enhances accuracy by 4.2% and 5.7%, respectively, compared with other classroom behavior recognition models. Moreover, the required computation is only 6.6 GFLOPS, 19.5% less than that of the original model. The YOLOv8n-BiFPN-C3Ghost model can detect and recognize online listening behavior with greater speed and accuracy at a lower computing cost, ultimately enabling the dynamic and scientific recognition of online classroom learning among students.
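
    BiFPN's contribution, referenced above, is weighted bidirectional feature fusion. A minimal sketch of its fast normalized fusion rule follows, operating on plain lists instead of feature tensors; the function name and weights are illustrative, not from the paper.

```python
def bifpn_fuse(features, weights, eps=1e-4):
    """BiFPN-style fast normalized fusion: each input feature map gets a
    learnable non-negative weight, and the weights are normalized so the
    fused output is a convex-like combination of the inputs."""
    ws = [max(w, 0.0) for w in weights]  # ReLU keeps weights non-negative
    total = sum(ws) + eps                # eps avoids division by zero
    n = len(features[0])
    fused = [0.0] * n
    for f, w in zip(features, ws):
        for i, v in enumerate(f):
            fused[i] += (w / total) * v
    return fused
```

    Compared with softmax-based fusion, this rule needs no exponentials, which is why it is described as "fast"; in the real network the weights are trained and the inputs are resized feature maps from different pyramid levels.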

  • Image Processing Based on Perceptual Information
    ZHOU Yu, XIE Wei, Kwong Tak Wu, JIANG Jianmin
    Computer Engineering. 2025, 51(1): 20-30. https://doi.org/10.19678/j.issn.1000-3428.0069369

    Video Snapshot Compressive Imaging (SCI) is a computational imaging technique that achieves efficient imaging through hybrid compression in the temporal and spatial domains. In video SCI, the sparsity of the signal and its correlations in the temporal and spatial domains can be exploited to effectively reconstruct the original video signal using appropriate reconstruction algorithms. Although recent deep learning-based reconstruction algorithms have achieved state-of-the-art results in many tasks, they still face challenges related to excessive model complexity and slow reconstruction speeds. To address these issues, this research proposes a reconstruction network model for SCI based on triple self-attention, called SCT-SCI. It employs a multibranch grouped self-attention mechanism to leverage correlations in the spatial and temporal domains. The SCT-SCI model comprises a feature extraction module, a video reconstruction module, and a triple self-attention module, called SCT-Block. Each SCT-Block comprises a window self-attention branch, a channel self-attention branch, and a temporal self-attention branch. Additionally, a spatial fusion module, called SC-2DFusion, and a global fusion module, called SCT-3DFusion, are introduced to enhance feature fusion. The experimental results show that, on the simulated video dataset, the proposed model demonstrates an advantage in low complexity: it saves 31.58% of the reconstruction time compared to the EfficientSCI model while maintaining similar reconstruction quality, thus improving real-time performance.

  • Research Hotspots and Reviews
    REN Shuyu, WANG Xiaoding, LIN Hui
    Computer Engineering. 2024, 50(12): 16-32. https://doi.org/10.19678/j.issn.1000-3428.0068553

    The superior performance of the Transformer in natural language processing has inspired researchers to explore its applications in computer vision tasks. The Transformer-based object detection model, Detection Transformer (DETR), treats object detection as a set prediction problem, introducing the Transformer model to address this task and eliminating the proposal generation and post-processing steps typical of traditional methods. The original DETR model suffers from slow training convergence and inefficiency in detecting small objects. To address these challenges, researchers have implemented various improvements to enhance DETR's performance. This study conducts an in-depth investigation of both the basic and enhanced modules of DETR, including modifications to the backbone architecture, query design strategies, and improvements to the attention mechanism. Furthermore, it provides a comparative analysis of various detectors and evaluates their performance and network architectures. The potential and application prospects of DETR in computer vision tasks are discussed, along with its current limitations and challenges. Finally, this study analyzes and summarizes related models, assesses the advantages and limitations of attention models in the context of object detection, and outlines future research directions in this field.

  • Research Hotspots and Reviews
    LI Shuo, ZHAO Chaoyang, QU Yinxuan, LUO Yaping
    Computer Engineering. 2024, 50(12): 33-47. https://doi.org/10.19678/j.issn.1000-3428.0068276

    Fingerprint recognition is one of the earliest and most mature biometric recognition technologies, widely used in mobile payments, access control, and attendance in the civilian field, as well as in criminal investigation to retrieve clues about suspects. Recently, deep learning technology has achieved excellent results in the field of biometric recognition, providing fingerprint researchers with new methods for automatic processing and for applying fusion features to represent fingerprints effectively at all stages of the fingerprint recognition process. This paper outlines the development history and application background of fingerprint recognition, and expounds the main processing flows of the three stages of fingerprint recognition: image preprocessing, feature extraction, and fingerprint matching. It summarizes the application status of deep learning technology in specific links at different stages, and compares the advantages and disadvantages of different deep neural networks in specific links, such as image segmentation, image enhancement, direction field estimation, minutiae extraction, and fingerprint matching. Finally, some current problems and challenges in the field of fingerprint recognition are analyzed, and future development directions, such as building public fingerprint datasets, multi-scale fingerprint feature extraction, and training end-to-end fingerprint recognition models, are discussed.

  • Graphics and Image Processing
    ZHANG Xu, CHEN Cifa, DONG Fangmin
    Computer Engineering. 2024, 50(12): 318-328. https://doi.org/10.19678/j.issn.1000-3428.0068588

    Achieving enhanced detection accuracy is a challenging task in the field of PCB defect detection. To address this problem, this study proposes a series of improvements for PCB defect detection. First, a novel attention mechanism, referred to as BiFormer, is introduced. This mechanism uses dual-layer routing to achieve dynamic sparse attention, thereby reducing the amount of computation required. Second, an innovative upsampling operator called CARAFE is employed. This operator combines semantic and content information for upsampling, making the upsampling process more comprehensive and efficient. Finally, a new loss function based on the MPDIoU metric, referred to as the LMPDIoU loss function, is adopted. This loss function effectively addresses the problems of class imbalance, small targets, and dense targets, thereby further improving image detection performance. The experimental results reveal that the model achieves a significant improvement in mean Average Precision (mAP), with a score of 93.91%, which is 13.12 percentage points higher than that of the original model. In terms of recognition accuracy, the new model reaches 90.55%, an improvement of 8.74 percentage points. These results show that the introduction of the BiFormer attention mechanism, CARAFE upsampling operator, and LMPDIoU loss function effectively improves the accuracy and efficiency of PCB defect detection. Thus, the proposed methods provide valuable references for research in industrial inspection, laying the foundation for future research and applications.
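
    For readers unfamiliar with the MPDIoU metric mentioned above, a compact sketch follows. It is based on the published MPDIoU formulation (IoU minus the squared top-left and bottom-right corner distances, normalized by the squared image dimensions); the exact LMPDIoU variant used in the paper may differ, and the function names are illustrative.

```python
def iou(b1, b2):
    """Standard IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    ix2, iy2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (a1 + a2 - inter)

def mpdiou_loss(pred, gt, img_w, img_h):
    """MPDIoU-style loss: penalize the squared distances between the two
    boxes' top-left and bottom-right corners, normalized by the squared
    image dimensions, on top of the plain IoU term."""
    d2_tl = (pred[0] - gt[0]) ** 2 + (pred[1] - gt[1]) ** 2
    d2_br = (pred[2] - gt[2]) ** 2 + (pred[3] - gt[3]) ** 2
    norm = img_w ** 2 + img_h ** 2
    return 1.0 - (iou(pred, gt) - d2_tl / norm - d2_br / norm)
```

    Because the corner distances stay informative even when two boxes barely overlap, this loss gives a useful gradient for small and densely packed targets where plain IoU saturates.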

  • Research Hotspots and Reviews
    PANG Wenhao, WANG Jialun, WENG Chuliang
    Computer Engineering. 2024, 50(12): 1-15. https://doi.org/10.19678/j.issn.1000-3428.0068694

    In the context of big data and the rapid advancement of fields such as scientific computing and artificial intelligence, there is an increasing demand for high computational power across various domains. The unique hardware architecture of the Graphics Processing Unit (GPU) makes it suitable for parallel computing. In recent years, the concurrent development of GPUs and fields such as artificial intelligence and scientific computing has enhanced GPU capabilities, leading to the emergence of mature General-Purpose Graphics Processing Units (GPGPUs). Currently, GPGPUs are among the most important co-processors for Central Processing Units (CPUs). However, the fixed hardware configuration of a GPU after delivery and its limited memory capacity can significantly hinder performance, particularly when dealing with large datasets. To address this issue, Compute Unified Device Architecture (CUDA) 6.0 introduced unified memory, allowing the GPGPU and CPU to share a virtual memory space, thereby simplifying heterogeneous programming and expanding the GPGPU-accessible memory space. Unified memory offers a solution for processing large datasets on GPGPUs and alleviates the constraints of limited GPGPU memory capacity. However, its use introduces performance issues, and effective data management within unified memory is the key to enhancing performance. This article provides an overview of the development and application of CUDA unified memory, covering its features and evolution, its advantages and limitations, its applications in artificial intelligence and big data processing systems, and its prospects. It provides a valuable reference for future work on applying and optimizing CUDA unified memory.

  • Graphics and Image Processing
    CHEN Zimin, GUAN Zhitao
    Computer Engineering. 2024, 50(12): 296-305. https://doi.org/10.19678/j.issn.1000-3428.0068512

    Deep-learning models have achieved impressive results in fields such as image classification; however, they remain vulnerable to interference and threats from adversarial examples. Attackers can craft small perturbations using various attack algorithms to create adversarial examples that are visually indistinguishable yet can lead to misclassification in deep neural networks, posing significant security risks to image classification tasks. To improve the robustness of these models, we propose an adversarial-example defense method that combines adversarial detection and purification using a conditional diffusion model, while preserving the structure and parameters of the target model during detection and purification. This approach features two key modules: adversarial detection and adversarial purification. For adversarial detection, we employ an inconsistency enhancement technique, training an image restoration model that integrates both the high-dimensional features of the target model and basic image features. By comparing the inconsistencies between the initial input and the restored output, adversarial examples can be detected. An end-to-end adversarial purification method is then applied, introducing image artifacts during the denoising process. An adversarial detection and purification module is placed before the target model to ensure its accuracy. Based on detection outcomes, appropriate purification strategies are implemented to remove adversarial examples and improve model robustness. The method was compared with recent adversarial detection and purification approaches on the CIFAR10 and CIFAR100 datasets, using five adversarial attack algorithms to generate adversarial examples. It demonstrated an improvement of 5-9 percentage points in detection accuracy over Argos on both datasets in a low-purification setting. Additionally, it exhibited more stable defense performance than Adaptive Denoising Purification (ADP), with 1.3 percentage points higher accuracy under Backwards Pass Differentiable Approximation (BPDA) attacks.

  • Intelligent Situational Awareness and Computing
    GUO Shangwei, LIU Shufeng, LI Ziming, OUYANG Deqiang, WANG Ning, XIANG Tao
    Computer Engineering. 2024, 50(11): 1-9. https://doi.org/10.19678/j.issn.1000-3428.0069758

    Cybersecurity threats are becoming increasingly prevalent with the rapid advancement of Internet technologies. Cyberattacks of high complexity and diversity pose significant challenges to existing defense mechanisms. As an emerging concept, situation awareness technology offers new approaches to enhancing cybersecurity defense. However, current cybersecurity situation awareness methods suffer from limited data feature extraction capabilities and inadequate handling of long-term sequential data. To address these issues, this study proposes a fusion model that integrates Stacked Sparse Auto-Encoder (SSAE), Convolutional Neural Network (CNN), Bidirectional Gated Recurrent Unit (BiGRU), and Attention Mechanism (AM). By utilizing SSAE and CNN to extract data features and enhancing the focus on critical information through the AM in the BiGRU model, the proposed model classifies the attack categories of abnormal traffic. In conjunction with the network security situational quantification indicators proposed in this study, the network security situation is quantitatively evaluated and classified. The experimental results demonstrate that the proposed fusion model outperforms traditional deep learning models on various metrics, enabling accurate perception of the network situation.

  • Development Research and Engineering Application
    XIE Jing, DENG Yueming, WANG Runmin
    Computer Engineering. 2024, 50(11): 338-349. https://doi.org/10.19678/j.issn.1000-3428.0068742

    Owing to the low detection accuracy for small targets in complex environments, along with false and missed detections in mainstream traffic sign detection algorithms, an improved algorithm based on YOLOv8s is proposed. This algorithm uses Pconv convolution in the backbone network and incorporates a C2faster module to achieve a lightweight network structure while maintaining network accuracy. To better utilize the information between low- and high-level features and enhance regional context association, the SPPFCSPC module is designed as a spatial pyramid pooling module based on the concept of SPPF. Furthermore, the GAM attention mechanism is added to strengthen the feature extraction capability of the network and effectively improve detection accuracy. To improve the detection of small targets, a four-fold downsampling branch is added at the neck of the network to optimize target positioning. Finally, the Focal-EIoU loss function replaces the original CIoU loss function to accurately define the aspect ratio of the prediction box, alleviating the imbalance between positive and negative samples. Experimental results show that, on the CCTSDB-2021 traffic sign dataset, the improved algorithm achieves a precision, recall, and mAP@0.5 of 86.1%, 73.0%, and 81.2%, respectively, representing increases of 0.8, 6.3, and 6.9 percentage points over the original YOLOv8s algorithm. The algorithm significantly reduces false and missed detections in complex weather and harsh environments, offering better overall detection performance than the comparison algorithms, with strong practical value.

  • Intelligent Situational Awareness and Computing
    BI Qian, QIAN Cheng, ZHANG Ke, WANG Cheng
    Computer Engineering. 2024, 50(11): 10-17. https://doi.org/10.19678/j.issn.1000-3428.0069710

    In intelligent situational awareness application scenarios, multi-agent angle tracking problems often arise when moving targets must be monitored and controlled. In contrast to traditional target tracking, the angle tracking task entails not only tracking the spatial coordinates of the target but also maintaining the relative angles between targets. Existing control methods often exhibit unstable effects and reduced performance when addressing large-scale problems that are susceptible to environmental changes. To address this problem, the present study proposes a solution scheme based on Multi-Agent Reinforcement Learning (MARL). First, a basic model of the multi-agent angle tracking problem is established, and a multi-level simulation decision-making framework is designed. On this basis, an adaptive MARL algorithm, AR-MAPPO, is proposed, which enhances learning efficiency and model stability by dynamically adjusting the number of data reuse rounds. The experimental results show that the proposed method achieves higher convergence efficiency and better angle tracking performance than traditional methods and other reinforcement learning methods in multi-agent angle tracking tasks.
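
    The distinction drawn above between position tracking and angle tracking can be made concrete with a small geometric sketch. This is independent of AR-MAPPO itself; the reward shape and its scale parameter are assumptions for illustration only.

```python
import math

def relative_bearing(agent_pos, agent_heading, target_pos):
    """Angle from the agent's heading to the line of sight to the target,
    wrapped to (-pi, pi]. Angle tracking drives this error toward a
    desired value rather than merely closing the distance."""
    los = math.atan2(target_pos[1] - agent_pos[1],
                     target_pos[0] - agent_pos[0])
    err = los - agent_heading
    return math.atan2(math.sin(err), math.cos(err))  # wrap to (-pi, pi]

def angle_tracking_reward(bearing_err, desired=0.0, scale=1.0):
    """Shaped reward for an RL agent: maximal when the bearing error
    equals the desired relative angle, decaying with angular deviation."""
    dev = abs(math.atan2(math.sin(bearing_err - desired),
                         math.cos(bearing_err - desired)))
    return math.exp(-scale * dev)
```

    In a MARL setting, each agent would receive such a reward for its assigned target; the learning algorithm then has to coordinate headings so that all relative-angle constraints are satisfied simultaneously.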

  • Artificial Intelligence and Pattern Recognition
    NI Yuan, LIAO Shihao, ZHANG Jian
    Computer Engineering. 2024, 50(11): 119-129. https://doi.org/10.19678/j.issn.1000-3428.0068258

    Natural Language Processing (NLP) models the Chinese Named Entity Recognition (NER) task as a sequence annotation task that maps each character in the text to a label. Each character is relatively independent and carries limited information; therefore, adding vocabulary information to the NER field can address the lack of connections between characters. Existing Chinese NER models require additional vocabulary construction, employ a cumbersome vocabulary information extraction process, and have difficulty integrating information owing to the different sources of word-level embeddings. To address these challenges, this study proposes a Chinese NER model based on Wobert and adversarial learning, named ALWAE-BiLSTM-CRF. Unlike traditional pre-training models, the Wobert pre-training model segments the text in advance (i.e., during the pre-training stage), thereby fully learning information at both the word and character levels. Accordingly, the proposed model obtains character word vectors through the Wobert pre-training model and then uses the Wobert word splitter to obtain the existing vocabulary vectors in the pre-training model. The model next uses the BiLSTM model to obtain the temporal information of the two, and then utilizes a multi-head attention mechanism to integrate vocabulary-level information into the character word vectors, while simultaneously generating adversarial samples through adversarial learning attacks to enhance model generalization. Finally, the model utilizes a Conditional Random Field (CRF) layer to constrain the results and obtain the best prediction sequence. Comparative and ablation experiments were conducted on the Resume dataset and a self-built Porcelain dataset in the field of porcelains; the results show that the ALWAE-BiLSTM-CRF model achieves F1 values of 97.21% and 85.7% on the two datasets, proving its effectiveness in the Chinese NER task.

  • Development Research and Engineering Application
    CUI Jinying, LIANG Lihe, REN Xueting, QIANG Yan, ZHAO Juanjuan, KONG Xiaomei, YU Xiao, ZHANG Hua
    Computer Engineering. 2024, 50(11): 350-359. https://doi.org/10.19678/j.issn.1000-3428.0068379

    When training on medical image datasets with noisy annotations, the prevailing approach partitions the noise-labeled dataset based on training loss to filter out noise-labeled samples. However, this method faces two pressing issues: first, filtering out noise samples while retaining, as far as possible, difficult samples with similar loss distributions; and second, enhancing sample utilization and uncovering valuable information embedded in noise samples to alleviate model overfitting. This study proposes a Sample Distribution Guided Noise Robust Learning strategy (SGRL), comprising sample partitioning and semi-supervised contrastive classification, to address these challenges. A straightforward yet effective sample selection method, called the noise filter method, is introduced to distinguish informative difficult samples from detrimental noise samples more accurately. Additionally, an enhanced matching contrastive network is proposed that trains on all samples, yielding a noise-robust classification model. Contrastive learning is utilized as a supplement to counter the memorization of noise labels and improve screening accuracy. The experimental results demonstrate significant performance improvement of the proposed method on dust-induced pneumoconiosis chest X-ray datasets with noise ratios of 5%, 10%, 20%, and 40%. Compared with existing state-of-the-art methods, the screening accuracy of this method increases by an average of 5.88, 7.05, 7.59, and 6.19 percentage points, respectively, validating the effectiveness of the proposed improvements.
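
    The loss-based partitioning that SGRL improves upon is commonly implemented with the small-loss criterion: low-loss samples are treated as clean and the remainder as noisy. A minimal sketch follows; the keep ratio is an assumed hyperparameter, and the paper's noise filter method refines this simple rule rather than using it directly.

```python
def partition_by_loss(losses, keep_ratio=0.7):
    """Small-loss trick: samples with the lowest training loss are most
    likely to carry clean labels. The rest are flagged as noisy; in a
    semi-supervised pipeline they are reused without their labels
    instead of being discarded outright."""
    order = sorted(range(len(losses)), key=lambda i: losses[i])
    k = int(len(losses) * keep_ratio)
    clean = set(order[:k])
    noisy = [i for i in range(len(losses)) if i not in clean]
    return sorted(clean), noisy
```

    The weakness this exposes is exactly the one noted above: genuinely difficult clean samples also have high loss, so a plain threshold cannot separate them from mislabeled ones without additional distribution information.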