
Most Downloaded


  • Cyberspace Security
    WU Ruolan, CHEN Yuling, DOU Hui, ZHANG Yangwen, LONG Zhong
    Computer Engineering. 2025, 51(2): 179-187. https://doi.org/10.19678/j.issn.1000-3428.0068705

Federated learning is an emerging distributed learning framework that facilitates the collective engagement of multiple clients in global model training without sharing raw data, thereby effectively safeguarding data privacy. However, traditional federated learning still harbors latent security vulnerabilities that leave it susceptible to poisoning and inference attacks. Enhancing the security and model performance of federated learning therefore requires precisely identifying malicious client behavior and employing gradient noise as a countermeasure that prevents attackers from recovering client data through gradient monitoring. This study proposes a robust federated learning framework that combines a malicious client detection mechanism with Local Differential Privacy (LDP) techniques. The algorithm initially employs gradient similarity to identify and classify potentially malicious clients, thereby minimizing their adverse impact on model training tasks. Subsequently, a dynamic privacy budget based on LDP is designed to accommodate the sensitivity of different queries and individual privacy requirements, with the objective of balancing privacy preservation and data quality. Experimental results on the MNIST, CIFAR-10, and Movie Reviews (MR) text classification datasets demonstrate that, compared to three baseline algorithms, this algorithm yields an average accuracy increase of 3 percentage points for sP-type clients, thereby achieving a higher security level with significantly enhanced model performance within the federated learning framework.
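The two mechanisms this abstract combines can be pictured in a few lines: flag clients whose gradients disagree with the group consensus, then perturb the surviving gradients with Laplace noise under a privacy budget ε. This is a minimal illustration under simplified assumptions, not the paper's algorithm; the similarity threshold, the plain Laplace mechanism, and all function names are ours.

```python
import math
import random

def cosine_similarity(u, v):
    """Cosine similarity between two flattened gradient vectors (assumed nonzero)."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def flag_suspicious(client_grads, threshold=0.0):
    """Flag clients whose gradient points away from the coordinate-wise mean.

    A crude stand-in for the paper's gradient-similarity detection step."""
    n, dim = len(client_grads), len(client_grads[0])
    mean = [sum(g[i] for g in client_grads) / n for i in range(dim)]
    return [cid for cid, g in enumerate(client_grads)
            if cosine_similarity(g, mean) < threshold]

def laplace_perturb(grad, epsilon, sensitivity=1.0):
    """Basic LDP mechanism: add Laplace noise with scale sensitivity/epsilon."""
    def laplace(scale):
        u = random.random() - 0.5  # inverse-CDF sampling of Laplace(0, scale)
        return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return [x + laplace(sensitivity / epsilon) for x in grad]
```

A smaller ε (stricter privacy) widens the noise, which is exactly the privacy/quality trade-off the dynamic budget is meant to balance.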

  • Research Hotspots and Reviews
    SUN Lijun, MENG Fanjun, XU Xingjian
    Computer Engineering. 2025, 51(11): 1-21. https://doi.org/10.19678/j.issn.1000-3428.0069543

In the context of ongoing advancements in educational informatization, constructing precise and efficient curriculum knowledge graphs has become key to promoting the development of personalized education. As a structured knowledge representation model, curriculum knowledge graphs reveal complex relations between curriculum content and learning objectives, optimizing the allocation of educational resources and tailoring personalized learning paths for learners. This survey discusses the techniques used to construct curriculum knowledge graphs, starting with an explanation of the basic concepts, intrinsic connections, and significant differences among general, educational, and curriculum knowledge graphs. It then delves into the key technologies used for building curriculum knowledge graphs, covering aspects such as curriculum ontology design, entity extraction, and relation extraction, and provides a detailed analysis and summary of their evolution, key features, and limitations. Furthermore, it explores the application value of curriculum knowledge graphs in scenarios such as learning resource recommendation, learner behavior profiling and modeling, and multimodal curriculum knowledge graph construction. Finally, it focuses on the challenges in constructing curriculum knowledge graphs, such as data diversity and heterogeneity, difficulties in quality evaluation, and the lack of cross-curriculum integration, and provides future-oriented insights based on cutting-edge technologies such as deep learning and Large Language Models (LLMs).

  • 40th Anniversary Celebration of Shanghai Computer Society
    QI Fenglin, SHEN Jiajie, WANG Maoyi, ZHANG Kai, WANG Xin
    Computer Engineering. 2025, 51(4): 1-14. https://doi.org/10.19678/j.issn.1000-3428.0070222

The rapid development of Artificial Intelligence (AI) has empowered numerous fields and significantly impacted society, establishing a solid technological foundation for university informatization services. This study explores the historical development of both AI and university informatization by analyzing their respective trajectories and interconnections. Although universities worldwide may focus on different aspects of AI in their digital transformation efforts, these efforts universally demonstrate the vast potential of AI in enhancing education quality and streamlining management processes. Thus, this study focuses on five core areas: teaching, learning, administration, assessment, and examination. It comprehensively summarizes typical AI-empowered application cases to demonstrate how AI effectively improves educational quality and management efficiency. In addition, this study highlights the potential challenges associated with AI applications in university informatization, such as data privacy protection, algorithmic bias, and technology dependence. Furthermore, common strategies for addressing these issues, such as enhancing data security, optimizing algorithm transparency and fairness, and fostering digital literacy among both teachers and students, are elaborated upon in this study. Based on these analyses, the study explores future research directions for AI in university informatization, emphasizing the balance between technological innovation and ethical standards. It advocates for the establishment of interdisciplinary collaboration mechanisms to promote the healthy and sustainable development of AI in the field of university informatization.

  • Mobile Internet and Communication Technology
    WANG Huahua, HUANG Yexia, LI Ling, WANG Jiacheng
    Computer Engineering. 2025, 51(12): 255-267. https://doi.org/10.19678/j.issn.1000-3428.0069877

When implementing Federated Learning (FL) in a cell-free network environment, user scheduling and resource allocation strategies are crucial for optimizing system time overhead, improving user reachability, and accelerating the FL convergence rate. To address the issue of uneven resource allocation, this study designs an optimization scheme that combines user scheduling, CPU processing frequency, and power allocation. This scheme aims to achieve fair resource allocation by maximizing the minimum user rate in the system, thus enhancing FL performance. The joint optimization problem is decomposed into two subproblems: user scheduling and power allocation. For user scheduling, this study proposes a greedy scheduling algorithm based on k-means clustering that comprehensively evaluates users' channel conditions and data "value" and categorizes users into different groups. Subsequently, a personalized CPU processing frequency allocation plan is developed for the users within each group based on their resource occupancy. Finally, by independently executing user scheduling within each group, user selection is performed efficiently and precisely, and the complexity of user selection is effectively reduced via early grouping. For power allocation, this study introduces a Bisection Method-based Power Allocation (BM-PA) algorithm. This algorithm not only considers fairness among users but also prioritizes resource-constrained users to ensure that they can obtain superior resource allocation. The BM-PA algorithm achieves fast convergence of power allocation using a low-complexity iterative optimization process, significantly improving resource utilization efficiency without deteriorating system performance. In this study, a reasonable user scheduling strategy serves as the foundation for obtaining optimal solutions for the power allocation subproblem.
This study adopts an alternating iteration method that allows independent optimization of each subproblem while accounting for the solution of the other. Over multiple rounds of iterative optimization, this interdependence ensures that power resources are reasonably allocated to the users who need them the most or are most likely to use them effectively, and the resulting joint optimization solution significantly improves overall system performance. Simulation results show that, compared with the baseline algorithm, the proposed algorithm exhibits outstanding performance in terms of downlink achievable rates, with an average improvement of up to 103.34% under optimal conditions. Additionally, the uplink achievable rates improve by up to 102.78%. Furthermore, the proposed algorithm saves 67.44% of the FL task training time on average compared to the baseline algorithm, particularly when the FL model accuracy reaches 90%, at which point the time overhead of the proposed algorithm is minimal.
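The max-min structure makes the power subproblem well suited to bisection: a common rate target t is feasible exactly when the total power needed to give every user rate t fits the budget. The sketch below assumes a plain Shannon rate with unit noise, which is far simpler than BM-PA's actual constraints; the fixed search bracket and function names are also our assumptions.

```python
import math

def min_power_for_rate(t, gain):
    """Power a user with channel gain `gain` needs to reach rate t
    under a Shannon-rate, unit-noise model: t = log2(1 + gain * p)."""
    return (2.0 ** t - 1.0) / gain

def bisection_max_min_rate(gains, p_total, tol=1e-6):
    """Bisect on the common rate target t: a target is feasible iff the
    summed per-user power requirement fits the total budget."""
    lo, hi = 0.0, 30.0  # assumed search bracket for the rate (bit/s/Hz)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        need = sum(min_power_for_rate(mid, g) for g in gains)
        if need <= p_total:
            lo = mid   # feasible: try a higher common rate
        else:
            hi = mid   # infeasible: lower the target
    powers = [min_power_for_rate(lo, g) for g in gains]
    return lo, powers
```

Note how weaker channels automatically receive more power, which is the fairness behavior the abstract describes for resource-constrained users.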

  • Research Hotspots and Reviews
    CI Tianzhao, YANG Hao, ZHOU You, XIE Changsheng, WU Fei
    Computer Engineering. 2025, 51(3): 1-23. https://doi.org/10.19678/j.issn.1000-3428.0068673

Smartphones have become an integral part of modern daily life. The Android operating system currently holds the largest market share in the mobile operating system market owing to its open-source nature and comprehensive ecosystem. Within Android smartphones, the storage subsystem plays a pivotal role, exerting a significant influence on the user experience. However, the design of Android mobile storage systems diverges from server scenarios, necessitating the consideration of distinct factors, such as resource constraints, cost sensitivity, and foreground application prioritization. Extensive research has been conducted in this area. By summarizing and analyzing the current research status in this field, we categorize the issues experienced by users of Android smartphone storage systems into five categories: host-side write amplification, memory swapping, file system fragmentation, flash device performance, and I/O priority inversion. Subsequently, existing works addressing these five categories of issues are classified, along with commonly used tools for testing and analyzing mobile storage systems. Finally, we conclude by examining existing techniques that ensure the user experience with Android smartphone storage systems and discuss potential avenues for future investigation.

  • Artificial Intelligence and Pattern Recognition
    ZHOU Hanqi, FANG Dongxu, ZHANG Ningbo, SUN Wensheng
    Computer Engineering. 2025, 51(4): 57-65. https://doi.org/10.19678/j.issn.1000-3428.0069100

Unmanned Aerial Vehicle (UAV) Multi-Object Tracking (MOT) technology is widely used in various fields such as traffic operation, safety monitoring, and water area inspection. However, existing MOT algorithms are primarily designed for single-UAV MOT scenarios. The perspective of a single UAV typically has limitations, which can lead to tracking failures when objects are occluded, thereby causing ID switching. To address this issue, this paper proposes a Multi-UAV Multi-Object Tracking (MUMTTrack) algorithm. The MUMTTrack network adopts an MOT paradigm based on Tracking By Detection (TBD), utilizing multiple UAVs to track objects simultaneously and compensating for the perspective limitations of a single UAV. Additionally, to effectively integrate the tracking results from multiple UAVs, an ID assignment strategy and an image matching strategy are designed based on the Speeded Up Robust Feature (SURF) algorithm for MUMTTrack. Finally, the performance of MUMTTrack is compared with that of existing widely used single-UAV MOT algorithms on the MDMT dataset. According to the comparative analysis, MUMTTrack demonstrates significant advantages in terms of MOT performance metrics, such as the Identity F1 (IDF1) value and Multi-Object Tracking Accuracy (MOTA).
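The cross-UAV ID assignment step can be sketched as a greedy one-to-one matching over appearance descriptors: score every cross-view track pair by similarity and accept pairs from best to worst. The toy vectors below stand in for SURF descriptors, and the greedy strategy, threshold, and function names are our assumptions rather than the paper's actual procedure.

```python
import math

def match_ids(desc_a, desc_b, min_sim=0.5):
    """Greedily pair tracks from two UAV views by descriptor similarity.

    desc_a / desc_b map track ID -> feature vector (stand-ins for SURF
    descriptors). Returns {id_in_a: id_in_b} for pairs above min_sim."""
    def cos(u, v):
        dot = sum(x * y for x, y in zip(u, v))
        return dot / (math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v)))
    # Score every cross-view pair, best first.
    pairs = sorted(((cos(va, vb), a, b)
                    for a, va in desc_a.items()
                    for b, vb in desc_b.items()), reverse=True)
    used_a, used_b, mapping = set(), set(), {}
    for sim, a, b in pairs:
        if sim >= min_sim and a not in used_a and b not in used_b:
            mapping[a] = b
            used_a.add(a)
            used_b.add(b)
    return mapping
```

When an object is occluded in one view, its track can keep the ID recovered from the other view's match, which is how a second perspective suppresses ID switching.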

  • AI-Enabled Vehicular Edge Computing
    QIN Minhao, SUN Weiwei
    Computer Engineering. 2025, 51(9): 1-13. https://doi.org/10.19678/j.issn.1000-3428.0069416

Traffic signal control plays an important role in alleviating traffic congestion and improving urban commuting efficiency. In recent years, breakthroughs have been made in traffic signal control algorithms based on deep reinforcement learning using real-time traffic data as input. However, traffic data in real-world scenarios often involve data distortion. Traditional solutions use reinforcement learning algorithms to control signal lights after repairing the distorted data. However, on the one hand, the dynamic phases of traffic signals introduce additional uncertainty into distortion repair, and on the other hand, distortion repair is difficult to combine with deep reinforcement learning frameworks to improve performance. To address these issues, a traffic signal control model for distorted data based on hidden state prediction, HCRL, is proposed. The HCRL model comprises encoding, control, and encoding prediction sub-models. By introducing a hidden state representation mechanism for signalized intersections, the HCRL model adapts better to deep reinforcement learning frameworks and effectively expresses the control state of signalized intersections. In addition, the HCRL model uses a special transfer training method to avoid data distortion interfering with the control sub-model. Two real datasets are used to verify the impact of data distortion on intelligent signal light control algorithms. The experimental results show that the HCRL model outperforms distortion-completion-based traffic signal control models in all distortion scenarios and at all distortion rates; further, it demonstrates strong robustness against data distortion when compared with other baseline models.

  • Research Hotspots and Reviews
    ZHANG Jin, CHEN Zhu, CHEN Zhaoyun, SHI Yang, CHEN Guanjun
    Computer Engineering. 2025, 51(7): 1-11. https://doi.org/10.19678/j.issn.1000-3428.0068870

Simulators play an indispensable role across an array of scientific research and development fields. Particularly in architectural design, simulators provide a secure and cost-effective virtual environment, enabling researchers to conduct rapid experimental analyses and evaluations. Simultaneously, simulators accelerate the chip design and verification processes, thereby conserving time and reducing resource expenditure. However, with the evolutionary advances in processor architectural designs, specifically the flourishing diversification of dedicated processors, the key role played by simulators in providing substantial feedback for architectural design exploration has gained prominence. This survey provides an overview of the current developments and applications of architectural simulators, highlighting a few illustrative examples. Analyzing the techniques employed by simulators dedicated to various processors allows for a deeper understanding of the focal points and technical complexities under different architectures. Moreover, this survey offers assessments and critiques of vital aspects of future architectural simulator development, aiming to forecast their prospects in processor design research.

  • Research Hotspots and Reviews
    PENG Long, GAO Yuanjun, LIU Xiaodong, YU Jie
    Computer Engineering. 2025, 51(10): 37-52. https://doi.org/10.19678/j.issn.1000-3428.0069708

    Advances in computational power and network technologies have driven robots toward miniaturization, swarm intelligence, and autonomous capabilities. Robot software deployed on robotic hardware must integrate diverse modules from low-level device drivers and controls to high-level motion planning and reasoning, resulting in increasingly complex architectures. A communication and programming framework for multi-robot systems—focusing on standardization, modularization, and platformization—can alleviate the complexity of programming robotic software. The development trends in robotic software and hardware architecture show that a swarm robotic system is a multi-domain, heterogeneous, and distributed system composed of computing nodes, actuators, sensors, and other hardware devices interconnected through wired or wireless networks. The heterogeneity of hardware devices makes it difficult to integrate software components into a single framework. This survey summarizes and analyzes existing robotic communication frameworks in terms of ease of use and portability, comparing their core features, such as programming models, heterogeneous hardware support, communication and coordination mechanisms between components, and programming languages. The survey then highlights the technical trends of advanced topics such as real-time virtualization, component orchestration, and fault tolerance. Moreover, this survey focuses on building a next-generation framework on a meta Operating System (OS) foundation, aiming to build a ubiquitous and integrated multi-robot software architecture for human-machine-object interactions.

  • Research Hotspots and Reviews
    DI Qinbo, CHEN Shaoli, SHI Liangren
    Computer Engineering. 2025, 51(11): 35-44. https://doi.org/10.19678/j.issn.1000-3428.0069780

As multivariate time series data become increasingly prevalent across various industries, anomaly detection methods that can ensure the stable operation and security of systems have become crucial. Owing to the inherent complexity and dynamic nature of multivariate time series data, higher demands are placed on anomaly detection algorithms. To address the inefficiencies of existing anomaly detection methods in processing high-dimensional data with complex variable relations, this study proposes an anomaly detection algorithm for multivariate time series data, based on Graph Neural Networks (GNNs) and a diffusion model, named GRD. By leveraging node embedding and graph structure learning, the GRD algorithm proficiently captures the relations between variables and refines features through a Gated Recurrent Unit (GRU) and a Denoising Diffusion Probabilistic Model (DDPM), thereby facilitating precise anomaly detection. Traditional assessments often employ a Point-Adjustment (PA) protocol that adjusts predictions before scoring, substantially overestimating an algorithm's capability. To reflect model performance realistically, this work adopts a new evaluation protocol along with new metrics. The GRD algorithm demonstrates F1@k scores of 0.741 4, 0.801 7, and 0.767 1 on three public datasets. These results indicate that the GRD algorithm consistently outperforms existing methods, with notable advantages in the processing of high-dimensional data, thereby underscoring its practicality and robustness in real-world anomaly detection applications.
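The evaluation point made here, that the PA protocol overestimates capability, is easy to demonstrate: under PA, detecting a single point of a long anomaly segment counts the whole segment as found. A minimal sketch (the function names and the tiny F1 helper are ours):

```python
def point_adjust(pred, label):
    """Point-Adjustment: if any point inside a true anomaly segment is
    predicted positive, mark the whole segment as detected."""
    adj = list(pred)
    n, i = len(label), 0
    while i < n:
        if label[i] == 1:
            j = i
            while j < n and label[j] == 1:  # find the segment's end
                j += 1
            if any(pred[i:j]):              # one hit adjusts the whole segment
                for k in range(i, j):
                    adj[k] = 1
            i = j
        else:
            i += 1
    return adj

def f1(pred, label):
    """Plain point-wise F1 over binary sequences."""
    tp = sum(1 for p, l in zip(pred, label) if p and l)
    fp = sum(1 for p, l in zip(pred, label) if p and not l)
    fn = sum(1 for p, l in zip(pred, label) if not p and l)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```

Catching one point out of a three-point anomaly scores 0.5 point-wise but a perfect 1.0 after adjustment, which is why protocols without this step give a more honest picture.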

  • Research Hotspots and Reviews
    LI Jiangxin, WANG Peng, WANG Wei
    Computer Engineering. 2025, 51(7): 47-58. https://doi.org/10.19678/j.issn.1000-3428.0069406

    Industrial time-series forecasting is critical for optimizing production processes and enhancing decision-making. Existing deep learning-based methods often underperform in this context due to a lack of domain knowledge. Prior studies have proposed using mechanistic models to guide deep learning; however, these approaches typically consider only a single mechanistic model, ignoring scenarios with multiple time-series prediction mechanisms in industrial processes and the inherent complexity of industrial time-series (e.g., multiscale dynamics and nonlinearity). To address this issue, this study proposes a Multi-Mechanism-guided Deep Learning for Industrial Time-series Forecasting (M-MDLITF) framework based on attention mechanisms. This framework embeds multiple mechanistic models into a deep industrial time-series prediction network to guide training and integrate the strengths of different mechanisms by focusing on final predictions. As an instantiation of the M-MDLITF, the Multi-mechanism Deep Wiener (M-DeepWiener) method employs contextual sliding windows and a Transformer-encoder architecture to capture complex patterns in industrial time-series. Experimental results from a simulated dataset and two real-world datasets demonstrate that M-DeepWiener achieves high computational efficiency and robustness. It significantly outperforms the single-mechanism Deep Wiener (DeepWiener), classical Wiener mechanistic models, and purely data-driven methods, reducing the prediction error by 20% compared to DeepWiener-M1 on the simulated dataset.

  • Research Hotspots and Reviews
    LU Yue, ZHOU Xiangyu, ZHANG Shizhou, LIANG Guoqiang, XING Yinghui, CHENG De, ZHANG Yanning
    Computer Engineering. 2025, 51(10): 1-17. https://doi.org/10.19678/j.issn.1000-3428.0070575

Traditional machine learning algorithms perform well only when the training and testing sets are identically distributed. They cannot perform incremental learning for new categories or tasks that were not present in the original training set. Continual learning enables models to learn new knowledge adaptively while preventing the forgetting of old tasks. However, continual learning methods still face challenges related to computational overhead, storage overhead, and performance stability. Recent advances in pre-training models have provided new research directions for continual learning, which are promising for further performance improvements. This survey summarizes existing pre-training-based continual learning methods. According to the anti-forgetting mechanism, they are categorized into five types: methods based on prompt pools, methods with slow parameter updating, methods based on backbone branch extension, methods based on parameter regularization, and methods based on classifier design. Additionally, these methods are classified according to the number of phases, fine-tuning approaches, and use of language modalities. Subsequently, the overall challenges of continual learning methods are analyzed, and the applicable scenarios and limitations of various continual learning methods are summarized. The main characteristics and advantages of each method are also outlined. Comprehensive experiments are conducted on multiple benchmarks, followed by in-depth discussions on the performance gaps among the different methods. Finally, the survey discusses research trends in pre-training-based continual learning methods.

  • Artificial Intelligence and Pattern Recognition
    PENG Juhong, ZHANG Chi, GAO Qian, ZHANG Guangming, TAN Donghua, ZHAO Mingjun
    Computer Engineering. 2025, 51(7): 152-160. https://doi.org/10.19678/j.issn.1000-3428.0069283

Steel surface defect detection technology in industrial scenarios is hindered by low detection accuracy and slow convergence speed. To address these issues, this study presents an improved YOLOv8 algorithm, namely YOLOv8n-MDC. First, a Multi-scale Cross-fusion Network (MCN) is added to the backbone network. By establishing closer connections between the feature layers, it promotes uniform information transmission and reduces semantic information loss during cross-layer feature fusion, thereby enhancing the model's ability to perceive steel defects. Second, deformable convolution is introduced into this module, adaptively changing the shape and position of the convolution kernel to capture the edge features of irregular defects more flexibly, reducing information loss and improving detection accuracy. Finally, a Coordinate Attention (CA) mechanism is added to embed position information into channel attention, solving the problem of position information loss and enabling the model to perceive the position and morphological features of defects, thereby enhancing detection precision and stability. Experimental results on the NEU-DET dataset show that the YOLOv8n-MDC algorithm achieves an mAP@0.5 of 81.0%, which is 4.2 percentage points higher than that of the original baseline network. The algorithm has a faster convergence speed and higher accuracy; therefore, it meets the requirements of practical industrial production.
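The Coordinate Attention idea, pooling along each spatial direction separately so the gates retain positional information, can be shown with a toy version. The shared convolutional layers of the real CA block are omitted here; the direct sigmoid gating and the function name are our simplifying assumptions.

```python
import math

def coordinate_attention(feat):
    """Toy Coordinate Attention over a (C, H, W) nested-list feature map:
    pool each channel along W (per row) and along H (per column), turn the
    pooled vectors into sigmoid gates, and reweight each position by its
    row gate times its column gate."""
    sig = lambda x: 1.0 / (1.0 + math.exp(-x))
    out = []
    for ch in feat:
        H, W = len(ch), len(ch[0])
        row = [sig(sum(ch[h]) / W) for h in range(H)]                    # pool along W
        col = [sig(sum(ch[h][w] for h in range(H)) / H) for w in range(W)]  # pool along H
        out.append([[ch[h][w] * row[h] * col[w] for w in range(W)]
                    for h in range(H)])
    return out
```

Unlike a purely channel-wise gate, the row and column gates differ per position, which is how CA recovers the positional cues that plain channel attention loses.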

  • Research Hotspots and Reviews
    MA Hengzhi, QIAN Yurong, LENG Hongyong, WU Haipeng, TAO Wenbin, ZHANG Yiyang
    Computer Engineering. 2025, 51(2): 18-34. https://doi.org/10.19678/j.issn.1000-3428.0068386

With the continuous development of big data and artificial intelligence technologies, knowledge graph embedding is developing rapidly, and knowledge graph applications are becoming increasingly widespread. Knowledge graph embedding improves the efficiency of knowledge representation and reasoning by representing structured knowledge in a low-dimensional vector space. This study provides a comprehensive overview of knowledge graph embedding technology, including its basic concepts, model categories, evaluation indices, and application prospects. First, the basic concepts and background of knowledge graph embedding are introduced, classifying the technology into four main categories: embedding models based on translation mechanisms, semantic-matching mechanisms, neural networks, and additional information. The core ideas, scoring functions, advantages and disadvantages, and application scenarios of the related models are meticulously sorted. Second, common datasets and evaluation indices of knowledge graph embedding are summarized, along with application prospects, such as link prediction and triple classification. The experimental results are analyzed, and downstream tasks, such as question-and-answer systems and recommender systems, are introduced. Finally, the knowledge graph embedding technology is reviewed and summarized, outlining its limitations and the primary existing problems while discussing the opportunities and challenges for future knowledge graph embedding along with potential research directions.
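Of the translation-mechanism family the survey classifies, TransE is the canonical example: a triple (h, r, t) is scored by how close h + r lands to t, and link prediction ranks candidate tails by that score. A minimal sketch with hand-picked vectors (the helper names are ours):

```python
import math

def transe_score(h, r, t):
    """TransE scoring function: negative L2 distance ||h + r - t||.
    Higher (closer to zero) means the triple is more plausible."""
    return -math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

def rank_tails(h, r, candidates):
    """Link prediction for (h, r, ?): rank candidate tail entities,
    given as a dict name -> vector, by descending TransE score."""
    return sorted(candidates, key=lambda name: -transe_score(h, r, candidates[name]))
```

The same scoring interface generalizes across the survey's categories: semantic-matching models such as DistMult swap the distance for a similarity product, while the evaluation indices (Hits@k, mean rank) are computed from exactly this kind of ranking.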

  • Artificial Intelligence and Pattern Recognition
    YUAN Yinghua, JIN Yingran, GAO Yun
    Computer Engineering. 2025, 51(12): 96-108. https://doi.org/10.19678/j.issn.1000-3428.0069871

The Siamese tracking network is a popular target tracking framework that includes three modules: backbone, fusion, and positioning networks. The Transformer is a relatively new and effective implementation method for fusion network modules. The encoder and decoder of the Transformer use a self-attention mechanism to enhance the features of the Convolutional Neural Network (CNN). However, the self-attention mechanism can only enhance features in the spatial dimension, without considering feature enhancement in the channel dimension. To enable the self-attention network of the Transformer to enhance features in both the spatial and channel dimensions and provide accurate correlation information for the target localization network, a Transformer tracker based on dual-dimensional feature enhancement is proposed to improve the Transformer fusion network. First, using the third- and fourth-stage features of the backbone network as inputs, channel-dimension feature enhancement is performed via CAE-Net in the self-attention module of the Transformer encoder and decoder to emphasize important channels. Subsequently, two-stage feature-weighted fusion and linear transformation are performed via SAE-Net to obtain the self-attention factors Q, K, and V. Finally, spatial-dimension feature enhancement is performed via a self-attention operation. Experiments conducted on five widely used public benchmark datasets reveal that the improved Transformer feature fusion module can improve the tracking performance of the tracker with minimal reduction in tracking speed.
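The channel-dimension enhancement can be pictured as a squeeze-and-excitation-style gate: squeeze each channel to a scalar by global average pooling, pass it through a learned transform, and rescale the channel by a sigmoid weight. The per-channel scalar weight below stands in for CAE-Net's learned layers, an assumption since the paper's exact architecture is not given here.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_enhance(feat, w):
    """Reweight the channels of a (C, H*W) feature map: global average
    pool each channel, apply a per-channel weight `w` (stand-in for the
    learned transform), then scale the channel by the sigmoid gate."""
    gates = [sigmoid(w[c] * (sum(ch) / len(ch))) for c, ch in enumerate(feat)]
    return [[g * v for v in ch] for g, ch in zip(gates, feat)]
```

This gate acts across channels at every spatial position, complementing spatial self-attention, which mixes positions but treats each channel uniformly.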

  • Research Hotspots and Reviews
    LIU Yanghong, FU Yangyouran, DONG Xingping
    Computer Engineering. 2025, 51(10): 18-26. https://doi.org/10.19678/j.issn.1000-3428.0070569

    The generation of High-Definition (HD) environmental semantic maps is indispensable for environmental perception and decision making in autonomous driving systems. To address the modality discrepancy between cameras and LiDARs in perception tasks, this paper proposes an innovative multimodal fusion framework, HDMapFusion, which significantly improves semantic map generation accuracy via feature-level fusion. Unlike traditional methods that directly fuse raw sensor data, our approach innovatively transforms both camera images and LiDAR point cloud features into a unified Bird's-Eye-View (BEV) representation, enabling physically interpretable fusion of multimodal information within a consistent geometric coordinate system. Specifically, this method first extracts visual features from camera images and 3D structural features from LiDAR point clouds using deep learning networks. Subsequently, a differentiable perspective transformation module converts the front-view image features into a BEV space and the LiDAR point clouds are projected into the same BEV space through voxelization. Building on this, an attention-based feature fusion module is designed to adaptively integrate the two modalities using weighted aggregation. Finally, a semantic decoder generates high-precision semantic maps containing lane lines, pedestrian crossings, road boundary lines, and other key elements. Systematic experiments conducted on the nuScenes benchmark dataset demonstrate that HDMapFusion significantly outperforms existing baseline methods in terms of HD map generation accuracy. These results validate the effectiveness and superiority of the proposed method, offering a novel solution to multimodal fusion in autonomous driving perception.
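The LiDAR side of the unified BEV representation is essentially a binning step: each point's (x, y) selects a grid cell, and the cell aggregates a statistic such as maximum height. A simplified sketch; the ranges, cell size, max-height statistic, and function name are illustrative assumptions rather than HDMapFusion's actual voxelization.

```python
def points_to_bev(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=1.0):
    """Project LiDAR points (x, y, z) onto a BEV grid by binning x/y into
    cells; each cell keeps the maximum height seen (a one-feature pillar).
    Cells never hit stay None."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = [[None] * ny for _ in range(nx)]
    for x, y, z in points:
        if x_range[0] <= x < x_range[1] and y_range[0] <= y < y_range[1]:
            i = int((x - x_range[0]) / cell)
            j = int((y - y_range[0]) / cell)
            if grid[i][j] is None or z > grid[i][j]:
                grid[i][j] = z
    return grid
```

Because camera features are warped into the same grid coordinates, fusing the two modalities reduces to combining per-cell feature vectors, which is what makes the attention-weighted aggregation geometrically consistent.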

  • Graphics and Image Processing
    WANG Shumeng, XU Huiying, ZHU Xinzhong, HUANG Xiao, SONG Jie, LI Yi
    Computer Engineering. 2025, 51(9): 280-293. https://doi.org/10.19678/j.issn.1000-3428.0069353

In Unmanned Aerial Vehicle (UAV) aerial photography, targets are usually small, densely distributed, and lacking in distinct features, and object scales vary greatly; consequently, missed and false detections readily occur. To solve these problems, a lightweight small object detection algorithm for aerial photography based on improved YOLOv8n, namely PECS-YOLO, is proposed. By adding a P2 small object detection layer in the Neck, the algorithm combines shallow and deep feature maps to better capture the details of small targets. A lightweight convolution, PartialConv, is introduced in a new Cross Stage Partial PartialConv (CSPPC) structure that replaces the Concatenation with Fusion (C2f) module in the Neck network to make the model lightweight. A Spatial Pyramid Pooling with Efficient Layer Aggregation Network (SPPELAN) module captures small object features effectively. By adding a Squeeze-and-Excitation (SE) attention mechanism in front of each detection head in the Neck, the network can better focus on useful channels and reduce the interference of background noise on small object detection in complex environments. Finally, EfficiCIoU is used as the bounding-box loss function; because it also accounts for shape differences between bounding boxes, it enhances the model's ability to detect small targets. Experimental results show that, compared with YOLOv8n, the mean Average Precision at an Intersection over Union (IoU) threshold of 0.5 (mAP@0.5) and at IoU thresholds of 0.5:0.95 (mAP@0.5:0.95) of the PECS-YOLO object detection algorithm on the VisDrone2019-DET dataset increase by 3.5% and 3.7%, respectively; the number of parameters is reduced by approximately 25.7%; and the detection speed increases by approximately 65.2%. In summary, the PECS-YOLO model is suitable for small object detection in UAV aerial photography.
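EfficiCIoU, like the CIoU family it builds on, starts from the plain Intersection-over-Union overlap term. The sketch below shows only that base term; the additional center-distance and shape-difference penalties that distinguish EfficiCIoU are omitted, and the box format is an assumption.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)  # clamp: no overlap -> 0
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

For small targets a tiny localization offset wipes out most of the intersection, which is why shape- and distance-aware extensions of IoU matter more in aerial imagery than in generic detection.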

  • Graphics and Image Processing
    ZHAO Hong, SONG Furong, LI Wengai
    Computer Engineering. 2025, 51(2): 300-311. https://doi.org/10.19678/j.issn.1000-3428.0068481

    Adversarial examples are crucial for evaluating the robustness of Deep Neural Networks (DNNs) and revealing their potential security risks. AdvGAN, an adversarial example generation method based on a Generative Adversarial Network (GAN), has made significant progress in generating image adversarial examples; however, the perturbations it generates are insufficiently sparse and large in amplitude, resulting in less authentic adversarial examples. To address this issue, this study proposes Squeeze-and-Excitation (SE)-AdvGAN, an improved image adversarial example generation method based on AdvGAN. SE-AdvGAN improves the sparsity of the perturbation by constructing an SE attention generator and an SE residual discriminator. The SE attention generator extracts the key features of an image and limits the positions where perturbations are generated, while the SE residual discriminator guides the generator to avoid generating irrelevant perturbations. Moreover, a boundary loss based on the l2 norm is added to the loss function of the SE attention generator to limit the amplitude of the perturbation, thereby improving the authenticity of the adversarial examples. The experimental results indicate that, in the white-box attack scenario, SE-AdvGAN generates adversarial-example perturbations with higher sparsity and smaller amplitude than existing methods and achieves better attack performance on different target models. This indicates that the high-quality adversarial examples generated by SE-AdvGAN can more effectively evaluate the robustness of DNNs.
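The l2-norm boundary loss can be illustrated with a minimal hinge-style formulation; penalizing only the part of the norm that exceeds an allowed budget is a common way to bound perturbation amplitude, and is offered here only as an assumption about the exact loss used in SE-AdvGAN:

```python
import numpy as np

def l2_boundary_loss(perturbation, bound):
    """Hinge-style boundary loss: penalize only the portion of the l2 norm
    of the perturbation that exceeds the allowed bound, so perturbations
    already within budget contribute zero loss."""
    norm = np.sqrt(np.sum(np.asarray(perturbation, dtype=float) ** 2))
    return max(0.0, norm - bound)
```

Added to the generator objective, this term pushes generated perturbations back inside the amplitude budget without affecting those already within it.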

  • Research Hotspots and Reviews
    JIANG Qiqi, ZHANG Liang, PENG Lingqi, KAN Haibin
    Computer Engineering. 2025, 51(3): 24-33. https://doi.org/10.19678/j.issn.1000-3428.0069378

    With the advent of the big data era, the proliferation of information types has increased the requirements for controlled data sharing. Decentralized Attribute-Based Encryption (DABE) has been widely studied in this context to enable fine-grained access control among multiple participants. However, the Internet of Things (IoT) data sharing scenario has become mainstream and requires more data features, such as cross-domain access, transparency, trustworthiness, and controllability, whereas traditional Attribute-Based Encryption (ABE) schemes pose a computational burden on resource-constrained IoT devices. To solve these problems, this study proposes an accountable and verifiable outsourced hierarchical attribute-based encryption scheme based on blockchain, which supports cross-domain data access and uses the blockchain to improve the transparency and trustworthiness of data sharing. By introducing the concept of a Verifiable Credential (VC), the scheme addresses user identity authentication and distributes the burden of the complex encryption and decryption processes to outsourced computing nodes. Finally, using a hierarchical structure, fine-grained data access control is achieved. A security analysis demonstrates that the proposed scheme can withstand chosen-plaintext attacks. Simulation results on small, resource-limited IoT devices using Docker show that the proposed scheme has a lower computational overhead than existing schemes. For up to 30 attributes, the computation cost does not exceed 2.5 s for any of the algorithms, and the average cost is approximately 1 s, making the scheme suitable for resource-constrained IoT devices.

  • PU Zhenyu, LIU Zhiwei, HUANG Bo, HE Shufeng, CHEN Nanxi, HAO Wenzeng
    Accepted: 2025-04-25
    In the modern industrial sector, the perception and analysis of text data have become essential for promoting intelligent manufacturing and optimizing production processes. However, industrial text data are typically highly specialized, diverse, and complex, and annotation is costly, making traditional large-scale annotation methods unsuitable. Existing few-shot Named Entity Recognition (NER) methods often use prototypical networks to classify entities, where the prototype is the average of the features of all samples belonging to the same category. These methods, however, are highly sensitive to the support set and prone to sample-selection bias. To address this, we propose DC-NER (Distribution Calibration-based Named Entity Recognition), a few-shot named entity recognition model based on distribution calibration. The model innovatively decomposes the task into two phases: span detection and entity classification. During the entity classification phase, a precise distance measurement function is employed to identify similar categories between the source and target domains; on this basis, the distribution of samples in the target domain is corrected to generate more accurate class prototypes. Experimental results on both an in-domain dataset (Few-NERD) and a cross-domain dataset (Cross-NER) demonstrate that DC-NER significantly outperforms comparative models in terms of F1 score, validating its effectiveness in few-shot named entity recognition.

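A minimal sketch of the distribution-calibration idea, assuming a simple Euclidean distance measure and a convex mixing of statistics (the paper's exact measurement function and correction rule are not given in the abstract):

```python
import numpy as np

def calibrated_prototype(support_feats, base_means, alpha=0.5):
    """Correct a few-shot class prototype by borrowing statistics from the
    most similar source-domain (base) class, reducing sample-selection bias.

    support_feats: (k, d) target-domain support features for one entity class.
    base_means:    (n_base, d) per-class mean features from the source domain.
    alpha:         mixing weight between support mean and base-class mean.
    """
    mu = support_feats.mean(axis=0)
    # distance measurement: pick the closest base class in feature space
    dists = np.linalg.norm(base_means - mu, axis=1)
    nearest = base_means[np.argmin(dists)]
    # calibrated prototype: convex combination of the two means
    return alpha * mu + (1 - alpha) * nearest
```

With few support samples, the raw mean `mu` is noisy; pulling it toward a statistically reliable base-class mean stabilizes the prototype used for nearest-prototype classification.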
  • Artificial Intelligence and Pattern Recognition
    HUANG Kun, QI Zhaojian, WANG Juanmin, HU Qian, HU Weichao, PI Jianyong
    Computer Engineering. 2025, 51(5): 133-142. https://doi.org/10.19678/j.issn.1000-3428.0069026

    Pedestrian detection in crowded scenes is a key technology in the intelligent monitoring of public spaces. It enables the intelligent monitoring of crowds, using object detection methods to detect the positions and number of pedestrians in videos. This paper presents Crowd-YOLOv8, an improved version of the YOLOv8 detection model, to address the issue of pedestrians being easily missed owing to occlusion and small target size in densely populated areas. First, nostride-Conv-SPD is introduced into the backbone network to enhance its capability of extracting fine-grained information, such as small object features in images. Second, small object detection heads and the CARAFE upsampling operator are introduced into the neck of the YOLOv8 network to fuse features at different scales and improve detection performance for small targets. Experimental results demonstrate that the proposed method achieves an mAP@0.5 of 84.3% and an mAP@0.5∶0.95 of 58.2% on the CrowdHuman dataset, improvements of 3.7 and 5.2 percentage points, respectively, over the original YOLOv8n. On the WiderPerson dataset, the proposed method achieves an mAP@0.5 of 88.4% and an mAP@0.5∶0.95 of 67.4%, improvements of 1.1 and 1.5 percentage points over the original YOLOv8n.
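The space-to-depth (SPD) rearrangement at the heart of nostride-Conv-SPD can be sketched as follows; the non-strided convolution that normally follows it is omitted for brevity:

```python
import numpy as np

def space_to_depth(x, scale=2):
    """SPD: rearrange a (C, H, W) map into (C*scale^2, H/scale, W/scale)
    so that downsampling preserves fine-grained detail in extra channels
    instead of discarding it as a strided convolution would."""
    c, h, w = x.shape
    assert h % scale == 0 and w % scale == 0
    x = x.reshape(c, h // scale, scale, w // scale, scale)
    x = x.transpose(2, 4, 0, 1, 3)          # bring the sub-grid offsets to the front
    return x.reshape(c * scale * scale, h // scale, w // scale)
```

Each output channel group holds one phase of the original pixel grid, so no pixel is thrown away during spatial reduction, which is what helps small-object features survive.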

  • Development Research and Engineering Application
    ZHANG Boqiang, CHEN Xinming, FENG Tianpei, WU Lan, LIU Ningning, SUN Peng
    Computer Engineering. 2025, 51(4): 373-382. https://doi.org/10.19678/j.issn.1000-3428.0068338

    This paper proposes a path-planning method based on the fusion of hybrid A* and a modified Reeds-Shepp (RS) curve to address the issue of unmanned transfer vehicles in confined scenarios being unable to maintain a safe distance from surrounding obstacles during path planning, resulting in collisions between vehicles and obstacles. First, a distance cost function based on the KD-Tree algorithm is proposed and added to the cost function of the hybrid A* algorithm. Second, the expansion strategy of the hybrid A* algorithm is changed by dynamically varying the node expansion distance based on the environment surrounding the vehicle, achieving dynamic node expansion and improving the algorithm's node search efficiency. Finally, the RS curve generation mechanism of the hybrid A* algorithm is improved so that the straight part of the generated RS curve is parallel to the boundary of the surrounding obstacles, meeting the requirements of road driving in the plant area. The local path is then smoothed to ensure the continuity of path curvature under vehicle kinematic constraints, improving the quality of the generated path. The experimental results show that, compared with traditional algorithms, the proposed algorithm reduces the search time by 38.06%, reduces the maximum curvature by 25.2%, and increases the closest distance from the path to the obstacles by 51.3%. Thus, the proposed method effectively improves the path-generation quality of the hybrid A* algorithm and performs well in confined scenarios.
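A minimal sketch of such a distance cost term, assuming a linear penalty inside a safety radius (the paper's exact cost shape is not specified; in practice a KD-Tree answers the nearest-obstacle query, while a linear scan is used here to stay self-contained):

```python
import math

def obstacle_distance_cost(node, obstacles, safe_dist, weight=1.0):
    """Distance cost term added to the hybrid A* node cost: grows linearly
    as the node approaches the nearest obstacle and is zero beyond the
    safe distance, steering expansion away from obstacle boundaries."""
    d = min(math.dist(node, ob) for ob in obstacles)  # nearest-obstacle query
    if d >= safe_dist:
        return 0.0
    return weight * (safe_dist - d) / safe_dist
```

Adding this term to the usual path-length and heuristic costs biases the search toward corridors that keep clearance, at the price of slightly longer paths.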

  • Graphics and Image Processing
    LIU Shengjie, HE Ning, WANG Xin, YU Haigang, HAN Wenjing
    Computer Engineering. 2025, 51(2): 278-288. https://doi.org/10.19678/j.issn.1000-3428.0068375

    Human pose estimation is widely used in multiple fields, including sports fitness, gesture control, unmanned supermarkets, and entertainment games. However, pose-estimation tasks face several challenges. Considering that current mainstream human pose-estimation networks have large parameter counts and complex computations, LitePose, a lightweight pose-estimation network based on a high-resolution network, is proposed. First, Ghost convolution is used to reduce the parameters of the feature extraction network. Second, the Decoupled Fully Connected (DFC) attention module is used to better capture long-range spatial dependencies between pixels and to mitigate the feature-extraction loss caused by the reduced parameter count, improving the accuracy of human pose keypoint regression; a feature enhancement module is also designed to further enhance the features extracted by the backbone network. Finally, a new coordinate decoding method is designed to reduce the error in the heatmap decoding process and improve the accuracy of keypoint regression. LitePose is validated on the human keypoint detection datasets COCO and MPII and compared with current mainstream methods. The experimental results show that LitePose loses only 0.2% accuracy compared to the baseline network HRNet, while its parameter count is less than one-third of the baseline's. LitePose can significantly reduce the number of parameters in the network model while ensuring minimal accuracy loss.
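For intuition, a common sub-pixel heatmap decoding scheme is sketched below. LitePose designs its own decoder, which the abstract does not detail; this quarter-offset refinement is only the widely used baseline that such decoders improve upon:

```python
import numpy as np

def decode_keypoint(heatmap):
    """Sub-pixel heatmap decoding: take the argmax cell and shift a quarter
    pixel toward the larger neighbor along each axis, a standard trick for
    reducing the quantization error of integer-grid heatmaps."""
    h, w = heatmap.shape
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    px, py = float(x), float(y)
    if 0 < x < w - 1:  # horizontal refinement from left/right neighbors
        px += 0.25 * np.sign(heatmap[y, x + 1] - heatmap[y, x - 1])
    if 0 < y < h - 1:  # vertical refinement from up/down neighbors
        py += 0.25 * np.sign(heatmap[y + 1, x] - heatmap[y - 1, x])
    return px, py
```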

  • Artificial Intelligence and Pattern Recognition
    DENG Zexian, ZHANG Yungui, ZHANG Lin
    Computer Engineering. 2025, 51(5): 154-165. https://doi.org/10.19678/j.issn.1000-3428.0069143

    Multi-dimensional time series classification is widely used in industry, medicine, finance, and other fields, where it plays an important role in industrial product quality control, disease prediction, financial risk control, and so on. The temporal and spatial dependencies of multi-dimensional time series are equally important, yet traditional multi-dimensional time series models focus only on one of the two dimensions. To address this, this paper proposes PRTMMTSC, a multi-dimensional time series classification model based on a pre-trained recursive Transformer-Mixer. The model is built on a Transformer-Mixer module that can fully learn the temporal and spatial correlations of multi-dimensional time series. To further improve classification performance, inspired by anomaly detection models, the proposed model combines the pre-trained hidden-layer features with the residual features and is trained with the PolyLoss loss function. To reduce the number of trainable parameters, the Transformer-Mixer module is constructed recursively, so that the trainable parameter count of the multi-layer model equals that of a single Transformer-Mixer layer. Experimental results on the UEA datasets show that the proposed model outperforms the comparison models. Compared with the TARNet and RLPAM models, the accuracy of the proposed model increases by 3.03% and 4.69%, respectively. Ablation experiments on the UEA datasets and on IF steel inclusion defect classification further illustrate the effectiveness of the proposed pre-training method, Transformer-Mixer module, residual information, and PolyLoss loss function.
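The recursive construction amounts to applying one layer's weights repeatedly, so effective depth grows while the parameter count stays that of a single layer. A toy sketch (the real Transformer-Mixer sub-layers are replaced by a residual linear map purely for illustration):

```python
import numpy as np

def recursive_mixer(x, w, depth=3):
    """Apply one mixer-style layer `depth` times with shared weights `w`:
    the trainable parameter count is that of a single layer regardless of
    the effective depth. The 'layer' here is a toy residual map."""
    for _ in range(depth):
        x = x + np.tanh(x @ w)  # same weights reused at every recursion step
    return x
```

The design choice trades some representational flexibility for a model that is much cheaper to store and regularizes naturally, which suits the moderate-sized UEA datasets.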

  • Artificial Intelligence and Pattern Recognition
    DAI Kangjia, XU Huiying, ZHU Xinzhong, LI Xiyu, HUANG Xiao, CHEN Guoqiang, ZHANG Zhixiong
    Computer Engineering. 2025, 51(3): 95-104. https://doi.org/10.19678/j.issn.1000-3428.0068950

    Traditional visual Simultaneous Localization And Mapping (SLAM) systems are based on the assumption of a static environment. However, real scenes often contain dynamic objects, which may lead to decreased accuracy, deteriorated robustness, and even tracking loss in SLAM pose estimation and map construction. To address these issues, this study proposes a new semantic SLAM system, named YGL-SLAM, based on ORB-SLAM2. The system first uses a lightweight target detection algorithm, YOLOv8n, to track dynamic objects and obtain their semantic information. Subsequently, both point and line features are extracted in the tracking thread, and dynamic features are culled based on the acquired semantic information using the Z-score and epipolar geometry algorithms to improve the performance of SLAM in dynamic scenes. Given that lightweight target detection algorithms suffer from missed detections in consecutive frames when tracking dynamic objects, this study designs a detection compensation method based on neighboring frames. Testing on the public datasets TUM and Bonn reveals that the YGL-SLAM system improves detection performance by over 90% compared with ORB-SLAM2, while demonstrating superior accuracy and robustness compared with other dynamic SLAM systems.
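The Z-score-based culling step can be sketched as a simple statistical outlier test on per-feature geometric errors; the error definition (e.g. distance to the epipolar line) and the threshold below are illustrative assumptions:

```python
import numpy as np

def cull_dynamic_features(errors, z_thresh=2.0):
    """Flag features whose geometric (e.g. epipolar-constraint) error is a
    statistical outlier: a feature is treated as dynamic when the Z-score
    of its error exceeds the threshold. Returns a keep-mask."""
    errors = np.asarray(errors, dtype=float)
    mu, sigma = errors.mean(), errors.std()
    if sigma == 0:
        return np.ones_like(errors, dtype=bool)  # all errors equal: keep all
    z = (errors - mu) / sigma
    return z <= z_thresh
```

Features flagged dynamic are excluded from pose optimization, so moving objects no longer drag the trajectory estimate.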

  • Research Hotspots and Reviews
    ZHAO Kai, HU Yuhuan, YAN Junqiao, BI Xuehua, ZHANG Linlin
    Computer Engineering. 2025, 51(8): 1-15. https://doi.org/10.19678/j.issn.1000-3428.0069147

    Blockchain, as a distributed and trusted database, has gained significant attention in academic and industrial circles for its effective application in the domain of digital copyright protection. Traditional digital copyright protection technologies suffer from issues such as difficulties in tracking infringements, complexities in copyright transactions, and inadequate protection of legitimate rights, which severely hamper the development of digital copyright protection endeavors. The immutability, traceability, and decentralization inherent in blockchain technology provide a highly reliable, transparent, and secure solution to mitigate the risks associated with digital copyright infringement. This overview starts with an introduction to the fundamental principles of blockchain technology. It then discusses the latest research findings on the integration of blockchain with traditional copyright protection technologies to address the problems inherent in traditional copyright protection schemes. Further, an evaluation of the practical applications and potential of blockchain is conducted, emphasizing its positive impact on the copyright protection ecosystem. Finally, this overview delves into the challenges and future trends related to blockchain-based copyright protection, ultimately aiming to establish a more robust and sustainable blockchain copyright protection system.

  • Research Hotspots and Reviews
    MAO Jingzheng, HU Xiaorui, XU Gengchen, WU Guodong, SUN Yanbin, TIAN Zhihong
    Computer Engineering. 2025, 51(2): 1-17. https://doi.org/10.19678/j.issn.1000-3428.0068374

    Industrial Control System (ICS) that utilizes Digital Twin (DT) technology plays a critical role in enhancing system security, ensuring stable operations, and optimizing production efficiency. The application of DT technology in the field of industrial control security primarily focuses on two key areas: security situation awareness and industrial cyber ranges. DT-based security situation awareness facilitates real-time monitoring, anomaly detection, vulnerability analyses, and threat identification while enabling a visualized approach to managing system security. Similarly, industrial cyber ranges powered by DT technology act as strategy validation platforms, supporting attack-defense simulations for ICSs, assessing the effectiveness of security strategies, enhancing the protection of critical infrastructure, and providing robust training support for personnel. This study analyzes the current security landscape of ICS and advancements in applying DT technology to enhance ICS security situation awareness, with particular emphasis on the technology's contributions to risk assessment. Furthermore, the study explores the optimization capabilities of the DT-based industrial cyber ranges for bolstering ICS security. Through a case study of intelligent power grids, this study validates the critical role of DT technology in ICS security. Finally, the study discusses future directions for the development of DT technology within the ICS security domain.

  • Space-Air-Ground Integrated Computing Power Networks
    LI Bin, SHAN Huimin
    Computer Engineering. 2025, 51(5): 1-8. https://doi.org/10.19678/j.issn.1000-3428.0069423

    To address the challenges of insufficient computing capacity of end users and the unbalanced distribution of computing power among edge nodes in computing power networks, this study proposes an Unmanned Aerial Vehicle (UAV)-assisted Device-to-Device (D2D) edge computing solution based on incentive mechanisms. First, under constraints involving computing resources, transmission power, and the unit pricing of computing resources, a unified optimization problem is formulated to maximize system revenue. This problem aims to optimize the task offloading ratio, computing resource constraints, UAV trajectory, as well as the transmission power and unit pricing of computing resources for both UAVs and users. The Proximal Policy Optimization (PPO) algorithm is employed to establish user offloading and purchasing strategies. In addition, an iterative strategy is implemented at each time step to solve the optimization problem and obtain the optimal solution. The simulation results demonstrate that the PPO-based system revenue maximization algorithm exhibits superior convergence and improves overall system revenue compared to the baseline algorithm.
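The PPO update underlying the offloading and pricing strategies relies on the standard clipped surrogate objective, sketched here for a single sample (network architectures, state encoding, and reward shaping are not specified in the abstract):

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO clipped surrogate objective for one action: the minimum of the
    unclipped and clipped importance-weighted advantage, which bounds how
    far a single update can move the policy.

    ratio:     pi_new(a|s) / pi_old(a|s), the importance sampling ratio.
    advantage: estimated advantage of the action taken.
    """
    return np.minimum(ratio * advantage,
                      np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage)
```

Maximizing the average of this quantity over collected trajectories gives PPO its stable, conservative policy improvement, which is why it converges reliably on the joint offloading/trajectory/pricing problem.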

  • Artificial Intelligence and Pattern Recognition
    WU Donghui, WANG Jinfeng, QIU Sen, LIU Guozhi
    Computer Engineering. 2025, 51(8): 107-119. https://doi.org/10.19678/j.issn.1000-3428.0070202
    Sign language recognition has received widespread attention in recent years. However, existing sign language recognition models face challenges such as long training times and high computational costs. To address this issue, this study proposes EWBiLSTM-ATT, a hybrid deep learning model for data from a wearable data glove that integrates an attention mechanism with an Expanded Wide-kernel Deep Convolutional Neural Network (EWDCNN) and a Bidirectional Long Short-Term Memory (BiLSTM) network. First, widening the first convolutional layer reduces the model parameter count and enhances computational speed, while deepening the EWDCNN convolutional layers improves the model's ability to automatically extract sign language features. Second, BiLSTM is introduced as a temporal model to capture the dynamic temporal information of sign language sequences, effectively handling temporal relationships in the sensor data. Finally, the attention mechanism learns a parameter matrix that assigns different weights to the hidden states of the BiLSTM and maps them to a weighted sum, allowing the model to automatically select the key time segments related to gesture actions by calculating attention weights for each time step. This study uses the STM32F103 as the main control module and builds a data glove sign language acquisition platform with MPU6050 and Flex Sensor 4.5 sensors as the core components. Sixteen dynamic sign language actions are selected to construct the GR-Dataset for model training. Under the same experimental conditions, the recognition rate of the EWBiLSTM-ATT model is 99.40%, which is 10.36, 8.41, 3.87, and 3.05 percentage points higher than those of the CLT-net, CNN-GRU, CLA-net, and CNN-GRU-ATT models, respectively; its total training time is reduced to 57%, 61%, 55%, and 56% of that of these comparison models, respectively.

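The attention step over the BiLSTM hidden states can be sketched as follows, assuming a single learned scoring vector (the paper learns a full parameter matrix; this one-vector version shows the same score/softmax/weighted-sum mechanics):

```python
import numpy as np

def temporal_attention(hidden_states, w):
    """Attention over BiLSTM hidden states: score each time step with a
    learned vector w, softmax the scores into weights, and return the
    weighted sum as the sequence representation.

    hidden_states: (T, d) hidden state per time step.
    w:             (d,) learned scoring vector.
    """
    scores = hidden_states @ w              # one scalar score per time step
    scores = scores - scores.max()          # numerical stability for softmax
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights @ hidden_states, weights  # (d,) context, (T,) weights
```

Time steps carrying the distinctive part of a gesture receive large weights, so the classifier sees a representation dominated by the informative segments rather than a uniform average.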
  • Research Hotspots and Reviews
    PANG Xin, GE Fengpei, LI Yanling
    Computer Engineering. 2025, 51(6): 1-19. https://doi.org/10.19678/j.issn.1000-3428.0069005

    Acoustic Scene Classification (ASC) aims to enable computers to simulate the human auditory system in the task of recognizing various acoustic environments, which is a challenging task in the field of computer audition. With rapid advancements in intelligent audio processing technologies and neural network learning algorithms, a series of new algorithms and technologies for ASC have emerged in recent years. To comprehensively present the technological development trajectory and evolution in this field, this review systematically examines both early work and recent developments in ASC, providing a thorough overview of the field. This review first describes application scenarios and the challenges encountered in ASC and then details the mainstream frameworks in ASC, with a focus on the application of deep learning algorithms in this domain. Subsequently, it systematically summarizes frontier explorations, extension tasks, and publicly available datasets in ASC and finally discusses the prospects for future development trends in ASC.

  • Research Hotspots and Reviews
    XU Yuanbo, REN Jing, WANG Liang, FU Ning, YU Zhiwen
    Computer Engineering. 2025, 51(2): 54-64. https://doi.org/10.19678/j.issn.1000-3428.0069749

    In light of the dynamic nature of user requirements in edge computing networks, as well as the communication congestion stemming from many users offloading tasks simultaneously, this study proposes an admission control mechanism for an Unmanned Aerial Vehicle (UAV)-assisted edge computing system. The aim is to maximize service provider revenue while maintaining Quality of Service (QoS) for users. First, a server communication threshold structure is established based on factors such as user channel quality and base station communication bandwidth, mitigating excessively high transmission delays for tasks. Users without a connection to a base station can opt to offload tasks to a UAV or process them directly on their terminal devices. Second, an optimal threshold for UAV task reception is determined considering the limited resources and operating costs of UAVs. UAVs perform preprocessing operations on tasks and offload the preprocessed tasks to the base station to reduce task-processing delays. This stage is modeled as a birth-death process, with matrix-geometric methods employed to derive the probability distribution of the system's steady state and the expected benefits for users. Subsequently, the optimal UAV task reception threshold is determined, optimal prices are set, and UAV revenue is maximized under high task concurrency. The simulation results demonstrate the significant advantages of the proposed algorithm in terms of service provider revenue and user QoS.
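For the birth-death modeling step, the stationary distribution of a finite birth-death chain follows from the product-form balance equations; a minimal sketch is below (the paper's matrix-geometric solution handles a more general block structure):

```python
import numpy as np

def birth_death_stationary(birth, death):
    """Stationary distribution of a finite birth-death chain with birth
    rates birth[i] (state i -> i+1) and death rates death[i] (i+1 -> i),
    using the detailed-balance recursion pi_{i+1} = pi_i * birth[i] / death[i]."""
    pi = [1.0]
    for b, d in zip(birth, death):
        pi.append(pi[-1] * b / d)
    pi = np.array(pi)
    return pi / pi.sum()  # normalize so probabilities sum to 1
```

With arrival (birth) rates capped by the UAV admission threshold, this distribution yields the queue-length probabilities from which expected delay and revenue can be computed.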

  • Artificial Intelligence and Pattern Recognition
    WANG Shuai, SHI Yancui
    Computer Engineering. 2025, 51(8): 190-202. https://doi.org/10.19678/j.issn.1000-3428.0069636

    Sequence recommendation algorithms dynamically model a user's historical behavior to predict content they may be interested in. This study focuses on the application of contrastive Self-Supervised Learning (SSL) in sequence recommendation, enhancing the model's representation ability in sparse data scenarios by designing effective self-supervised signals. First, a personalized data augmentation method incorporating user preferences is proposed to address the noise introduced by random data augmentation. This method guides the augmentation process based on user ratings and combines different augmentation methods for short and long sequences to generate augmented sequences that align with user preferences. Second, a mixed-augmentation training approach is designed to address imbalanced feature learning during training. In the early stages of training, augmented sequences are generated using randomly selected methods to enhance model performance and generalization. In the later stages, augmented sequences with high similarity to the original sequences are selected so that the model can comprehensively learn users' actual preferences and behavior patterns. Finally, traditional sequence prediction objectives are combined with SSL objectives to infer user representations. Experimental verification is performed on the Beauty, Toys, and Sports datasets. Compared with the best baseline results, the HR@5 of the proposed method increases by 6.61%, 3.11%, and 3.76%, and the NDCG@5 increases by 11.40%, 3.50%, and 2.16%, respectively, on these datasets. These experimental results confirm the rationality and validity of the proposed method.
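Choosing different augmentations for short and long sequences can be sketched as follows; the length threshold, the specific operations (mask vs. crop), and the 60% crop ratio are illustrative assumptions rather than the paper's settings:

```python
import random

def augment(seq, short_len=5, rng=None):
    """Length-aware augmentation: mask a single item in short sequences
    (to avoid destroying scarce signal) and crop a contiguous subsequence
    of long ones. `seq` is a list of item IDs."""
    rng = rng or random.Random(0)
    if len(seq) < short_len:
        i = rng.randrange(len(seq))              # mask one position
        return seq[:i] + ["<mask>"] + seq[i + 1:]
    k = max(1, int(0.6 * len(seq)))              # keep ~60% of a long sequence
    start = rng.randrange(len(seq) - k + 1)
    return seq[start:start + k]
```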

  • Artificial Intelligence and Pattern Recognition
    CHANG Ru, LIU Yujie, SUN Haojie, DONG Liwei
    Computer Engineering. 2025, 51(9): 110-119. https://doi.org/10.19678/j.issn.1000-3428.0069711

    Aiming at non-affine nonlinear multi-Agent systems with full-state constraints, this study investigates an event-triggered formation control strategy with prescribed performance. The study proposes a barrier function-based nonlinear mapping technique to transform the full-state constraints into the boundedness of mapped variables, thereby eliminating the feasibility conditions in the controller design. It then introduces a shift function and a prescribed-time convergent performance function to constrain the formation tracking error, thereby removing the restriction that the initial formation tracking error must lie within the performance constraint range and improving formation performance. The study also designs an event-triggered prescribed performance formation controller that guarantees the Agents achieve the desired formation within a prescribed time and maintain it thereafter, while significantly reducing controller-to-actuator signal transmissions. Lyapunov stability analysis proves that all signals in the system are semi-globally uniformly ultimately bounded, and the theoretical analysis rules out the possibility of Zeno behavior. Finally, numerical simulations verify the effectiveness of the proposed method.
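For intuition, the prescribed performance constraint can be written with the classic exponentially convergent performance function; the paper uses a prescribed-time variant together with a shift function, so the following is only the standard form the approach generalizes:

```latex
% A tracking error e(t) confined to a shrinking performance funnel:
-\delta_1 \rho(t) < e(t) < \delta_2 \rho(t), \qquad
\rho(t) = (\rho_0 - \rho_\infty)\, e^{-\ell t} + \rho_\infty ,
```

where $\rho_0$ is the initial bound, $\rho_\infty$ the steady-state bound, $\ell$ the convergence rate, and $\delta_1, \delta_2 > 0$ shape the asymmetry of the funnel. Prescribed-time designs replace $e^{-\ell t}$ with a function reaching $\rho_\infty$ at a user-chosen finite time.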

  • Graphics and Image Processing
    WANG Guoming, JIA Daiwang
    Computer Engineering. 2025, 51(12): 294-303. https://doi.org/10.19678/j.issn.1000-3428.0070027

    Deep learning-based object detection has significantly improved the detection of medium and large targets. However, when detecting small objects, traditional algorithms often face challenges such as missed detections and false positives owing to the inherent issues of small scale and complex backgrounds. Therefore, this study aims to enhance the accuracy of small object detection by improving the YOLOv8 model. First, the convolutional module in the backbone is replaced with the RFAConv module, which enhances the ability of the model to process complex images. Second, a Mixed Local Channel Attention (MLCA) mechanism is introduced in the neck part, allowing the model to fuse features from different layers more efficiently while maintaining computational efficiency. Third, the Detect head of YOLOv8 is replaced with the Detect_FASFF head to address the inconsistency between different feature scales and improve the ability of the model to detect small objects. Finally, the Complete Intersection over Union (CIoU) loss function is replaced with the Focaler-IoU loss function, enabling the model to focus more on small objects that are difficult to locate precisely. Experimental results show that the improved model increases mAP@0.5 by 4.8 percentage points and mAP@0.5∶0.95 by 3.0 percentage points on the FloW-Img dataset, which is sparse in small objects. On the VisDrone2019 dataset, which has a high density of small objects, mAP@0.5 increases by 5.9 percentage points and mAP@0.5∶0.95 improves by 4.0 percentage points. In addition, generalization comparison experiments are conducted on the low-altitude dataset AU-AIR and the pedestrian-dense detection dataset WiderPerson. The optimized model significantly improves the accuracy of small object detection compared with the original model and expands its applicability.
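The Focaler-IoU idea is to remap IoU linearly within an interval [d, u] before computing the loss, refocusing regression on a chosen band of samples. A sketch following the published formulation (the d and u values used in this paper are not given in the abstract):

```python
def focaler_iou(iou, d=0.0, u=0.95):
    """Focaler-IoU remapping: clamp IoU outside [d, u] and rescale it
    linearly inside, so the loss 1 - focaler_iou concentrates its gradient
    on samples within the chosen IoU band (e.g. hard, low-IoU boxes)."""
    if iou < d:
        return 0.0
    if iou > u:
        return 1.0
    return (iou - d) / (u - d)
```

The bounding-box loss then becomes `1 - focaler_iou(iou)`, optionally combined with the usual distance/shape penalty terms of CIoU-style losses.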

  • Graphics and Image Processing
    HU Qian, PI Jianyong, HU Weichao, HUANG Kun, WANG Juanmin
    Computer Engineering. 2025, 51(3): 216-228. https://doi.org/10.19678/j.issn.1000-3428.0068753

    Considering the low accuracy of existing pedestrian detection methods for dense or small-target pedestrians, this study proposes a comprehensively improved algorithm model, YOLOv5_Conv-SPD_DAFPN, based on You Only Look Once (YOLO) v5, a non-strided Convolution Space-to-Depth (Conv-SPD) module, and a Double Asymptotic Feature Pyramid Network (DAFPN). First, to address the loss of feature information for small-target or dense pedestrians, the Conv-SPD network module is introduced into the backbone network to replace the original strided convolution, effectively mitigating feature information loss. Second, to solve the low feature fusion rate caused by nonadjacent feature maps not being merged directly, this study proposes DAFPN to significantly improve the accuracy and precision of pedestrian detection. Finally, building on the Efficient Intersection over Union (EIoU) and Complete-IoU (CIoU) losses, this study introduces the EfficiCIoU_Loss localization loss function to accelerate the box regression rate, promoting faster convergence of the network model. Compared with the original YOLOv5 model, the improved model raises mAP@0.5 by 3.9 and 5.3 percentage points and mAP@0.5∶0.95 by 2.1 and 2.1 percentage points on the CrowdHuman and WiderPerson pedestrian datasets, respectively. After introducing EfficiCIoU_Loss, the model convergence speed improves by 11% and 33%, respectively. These improvements yield significant progress in dense pedestrian detection based on YOLOv5 in terms of feature information retention, multiscale fusion, and loss function optimization, enhancing performance and efficiency in practical applications.

  • Computer Architecture and Software Technology
    ZHANG Ming, GUO Wenkang, WANG Haifeng
    Computer Engineering. 2025, 51(3): 197-207. https://doi.org/10.19678/j.issn.1000-3428.0068477

    Graphics Processing Units (GPUs) are not fully utilized when processing large-scale dynamic graphs, and the limitations of GPU-oriented graph partitioning methods lead to performance bottlenecks. To improve graph computing performance on heterogeneous processors, a Central Processing Unit (CPU)/GPU Distributed Heterogeneous Engine (DH-Engine) is proposed. First, a new heterogeneous graph partitioning algorithm is proposed. It takes a streaming graph partitioning algorithm as its core to achieve dynamic load balancing both between computing nodes and between the CPU and GPU. During initial graph partitioning, a greedy strategy assigns each vertex based on the maximum number of neighboring vertices, and during iteration it dynamically adjusts vertex placement based on the minimum number of connected edges. Second, the system introduces a GPU heterogeneous computing model to improve graph computing efficiency through functional parallelism. Comparative experiments with other graph computing systems were conducted using PageRank, Connected Components (CC), Single-Source Shortest Path (SSSP), and k-core as examples. Compared with other graph engines, DH-Engine better balances the computing load across nodes and between heterogeneous processors, shortening delays and accelerating overall computation. The results show that the CPU/GPU load ratio of this system tends to 1, and the heterogeneous computation achieves a speedup of up to 5 times over other graph computing systems. DH-Engine thus provides an improved heterogeneous graph computing scheme.
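The greedy streaming assignment can be sketched as follows, assuming a hard per-partition capacity as the load-balance constraint (the engine's actual balancing across nodes and the CPU/GPU split are more elaborate, and its iterative refinement phase is omitted):

```python
def greedy_partition(edges, n_parts, capacity):
    """Streaming greedy graph partitioning: place each arriving vertex in
    the partition already holding most of its placed neighbors, subject to
    a capacity cap that keeps partitions balanced."""
    adj = {}
    for u, v in edges:  # build adjacency from the edge stream
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    assignment, loads = {}, [0] * n_parts
    for vertex in sorted(adj):  # stream vertices in arrival order
        best, best_score = None, -1
        for p in range(n_parts):
            if loads[p] >= capacity:
                continue  # respect the balance constraint
            score = sum(1 for nb in adj[vertex] if assignment.get(nb) == p)
            if score > best_score:
                best, best_score = p, score
        assignment[vertex] = best
        loads[best] += 1
    return assignment
```

Favoring the partition with the most placed neighbors minimizes cut edges (and hence CPU-GPU communication), while the capacity cap prevents any one processor from being overloaded.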

  • Development Research and Engineering Application
    TANG Jingwen, LAI Huicheng, WANG Tongguan
    Computer Engineering. 2025, 51(4): 303-313. https://doi.org/10.19678/j.issn.1000-3428.0068897

    Pedestrian detection in intelligent community scenarios must accurately recognize pedestrians under diverse conditions. However, for persons who are occluded or at long distances, existing detectors exhibit problems such as missed detection, false detection, and large model sizes. To address these problems, this paper proposes a pedestrian detection algorithm, Multiscale Efficient-YOLO (ME-YOLO), based on YOLOv8. An efficient feature Extraction Module (EM) is designed to improve network learning and capture pedestrian features, which reduces the number of network parameters and improves detection accuracy. The reconstructed detection head module reintegrates the detection layer to enhance the network's ability to recognize small targets and effectively detect small-target pedestrians. A Bidirectional Feature Pyramid Network (BiFPN) is introduced to design a new neck network, namely the Bidirectional Dilated Residual-Feature Pyramid Network (BDR-FPN); its dilated residual module and weighted attention mechanism expand the receptive field and emphasize pedestrian features during learning, thereby alleviating the network's insensitivity to occluded pedestrians. Compared with the original YOLOv8 algorithm, ME-YOLO increases the AP50 by 5.6 percentage points, reduces the number of model parameters by 41%, and compresses the model size by 40% after training and verification on the CityPersons dataset. ME-YOLO also increases the AP50 by 4.1 percentage points and AP50∶95 by 1.7 percentage points on the TinyPerson dataset. The algorithm thus significantly reduces the number of model parameters and the model size while effectively improving detection accuracy, giving it considerable application value in intelligent community scenarios.
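The weighted fusion at the heart of BiFPN-style necks such as BDR-FPN can be sketched as "fast normalized fusion": each input feature gets a learnable non-negative weight, and the outputs are a normalized weighted sum. The code below is an illustrative stand-in operating on flat value lists, not the paper's implementation:

```python
def weighted_fusion(features, weights, eps=1e-4):
    """BiFPN-style fast normalized fusion of same-shape feature values:
    out = sum(relu(w_i) * f_i) / (sum(relu(w_i)) + eps).
    features: list of equal-length value lists; weights: one per feature."""
    ws = [max(0.0, w) for w in weights]  # relu keeps weights non-negative
    total = sum(ws) + eps                # eps avoids division by zero
    return [sum(w * f[i] for w, f in zip(ws, features)) / total
            for i in range(len(features[0]))]
```

Because weights are clipped at zero and normalized, the fusion stays a convex combination of its inputs, which is what makes this scheme cheaper and more stable than softmax-based fusion.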

  • Development Research and Engineering Application
    CUI Jinrong, YE Weihao, ZHENG Hong, LIU Tonglai, QI Long, XU Yong
    Computer Engineering. 2025, 51(3): 320-333. https://doi.org/10.19678/j.issn.1000-3428.0070049

    In the early stages of rice cultivation, interference from complex environments, such as green algae, is often encountered when counting tiny rice seedlings, making it difficult to distinguish the seedlings from the background and in turn degrading the performance of detection and counting models. However, current general-purpose deep learning methods face challenges in detecting tiny seedlings in complex cross-domain scenarios. Therefore, this paper proposes a domain-adaptive Normalized Gaussian Wasserstein Distance (NWD)-YOLOv5 model based on Mean Teacher to solve the problem of counting tiny rice seedlings from the perspective of an Unmanned Aerial Vehicle (UAV). To improve the detection and counting of tiny seedlings in complex backgrounds, a semi-supervised domain-adaptive training strategy based on the Mean Teacher model is integrated into the YOLOv5 network. Furthermore, a prediction box metric based on NWD is used as the loss function of YOLOv5 to improve the accuracy of positive and negative sample assignment for tiny objects. Experimental results show that the improved model generalizes better than the original YOLOv5 model: the mAP@0.5 increases from 60.0% to 95.9%. Compared with other object detection models, the proposed domain-adaptive model has greater advantages. Compared with the traditional manual method, the designed rice seedling counting method has an accuracy of 98.6%, achieves an R2 value of 0.900 3, and requires only one-fifth of the time needed by the manual method. Ablation experiments show that the proposed domain-adaptive model achieves performance comparable to that of Oracle, a supervised learning method, and significantly superior to that of Source Only, a baseline method. 
This study provides insights to improve the accuracy of rice plant counting in complex and variable application environments and can serve as technical support for rice crop management methods.
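The NWD metric models each bounding box as a 2-D Gaussian and compares boxes via the Wasserstein distance, which for axis-aligned boxes has a closed form; the distance is then mapped to a (0, 1] similarity by a negative exponential. A minimal sketch (the normalizing constant `c` is dataset-dependent; 12.8 here is only a placeholder):

```python
import math

def nwd(box_a, box_b, c=12.8):
    """Normalized Gaussian Wasserstein Distance similarity between two
    boxes given as (cx, cy, w, h). A box is modeled as a Gaussian with
    mean (cx, cy) and covariance diag((w/2)^2, (h/2)^2); the squared
    2-Wasserstein distance between such Gaussians is the squared
    Euclidean distance between (cx, cy, w/2, h/2) vectors.
    Returns a value in (0, 1]; higher means more similar."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    w2 = ((ax - bx) ** 2 + (ay - by) ** 2
          + ((aw - bw) / 2) ** 2 + ((ah - bh) / 2) ** 2)
    return math.exp(-math.sqrt(w2) / c)
```

Unlike IoU, this similarity degrades smoothly with center offset even when tiny boxes no longer overlap, which is why it suits positive/negative sample assignment for tiny objects.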

  • Research Hotspots and Reviews
    YUAN Yajian, MAO Li
    Computer Engineering. 2025, 51(3): 54-63. https://doi.org/10.19678/j.issn.1000-3428.0069042

    Traffic sign detection is crucial for assisted driving and plays a vital role in ensuring driving safety. However, in real-world traffic environments, factors such as darkness and rain create background noise that complicates the detection process. In addition, existing models often struggle to effectively detect small traffic signs at a distance. Furthermore, when a traffic sign detection model is designed, the model size must be considered for practical deployment. To address these challenges, this study proposes a lightweight traffic sign detection model based on YOLOv8 with enhanced foregrounds. First, a lightweight PC2f module is designed to replace part of the C2f modules in the original Backbone. This modification reduces the number of parameters and the computational load, enriches the gradient flow, retains more shallow information, and ultimately enhances detection performance while maintaining a lightweight design. Next, the study designs a Foreground Enhancement Module (FEM) and incorporates it into the Neck to effectively amplify foreground information and reduce background noise. Finally, the study adds a small-target detection layer to extract shallow features from high-resolution images, thereby improving the model's ability to detect small-target traffic signs. Experimental results show that the optimized model achieves mAP50 scores of 82.5% and 95.3% on the CCTSDB 2021 and GTSDB datasets, improvements of 3.6 and 1 percentage points over the original model, respectively, while reducing the model weight size by 0.22×10⁶. These results confirm the effectiveness of the proposed model for practical applications.
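The lightweight pattern behind partial-convolution-style blocks (the exact internals of the paper's PC2f module are not specified in the abstract) is to apply the convolution to only a fraction of the channels and pass the rest through unchanged, cutting parameters and FLOPs. A toy sketch, with `partial_conv_block` and `conv_fn` as hypothetical names:

```python
import math

def partial_conv_block(feat, conv_fn, ratio=0.25):
    """Toy partial-convolution pattern: apply conv_fn only to the first
    ceil(C * ratio) channels; the remaining channels are passed through
    untouched. feat: list of C channel maps (any per-channel structure)."""
    k = max(1, math.ceil(len(feat) * ratio))
    return [conv_fn(ch) for ch in feat[:k]] + feat[k:]
```

With `ratio=0.25`, only a quarter of the channels incur convolution cost per block, which is the source of the parameter and computation savings.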

  • Graphics and Image Processing
    SHA Yuyang, LU Jingtao, DU Haofan, ZHAI Xiaobing, MENG Weiyu, LIAN Xu, LUO Gang, LI Kefeng
    Computer Engineering. 2025, 51(7): 314-325. https://doi.org/10.19678/j.issn.1000-3428.0068674

    Image segmentation is a crucial technology for environmental perception, and it is widely used in scenarios such as autonomous driving and virtual reality. With the rapid development of technology, computer vision-based blind guiding systems are attracting increasing attention, as they outperform traditional solutions in terms of accuracy and stability. The semantic segmentation of road images is an essential feature of a visual guiding system. By analyzing the output of the algorithms, the guiding system can understand the current environment and help blind people navigate safely: avoiding obstacles, moving efficiently, and following an optimal path. Visual blind guiding systems are often used in complex environments, which demand high running efficiency and segmentation accuracy. However, commonly used high-precision semantic segmentation algorithms are unsuitable for blind guiding systems owing to their low running speed and large number of model parameters. To solve this problem, this paper proposes a lightweight road image segmentation algorithm based on multiscale features. Unlike existing methods, the proposed model contains two feature extraction branches, namely, the Detail Branch and the Semantic Branch. The Detail Branch extracts low-level detail information from the image, while the Semantic Branch extracts high-level semantic information. Multiscale features from the two branches are processed by the designed feature mapping module, which further improves feature modeling performance. Subsequently, a simple and efficient feature fusion module is designed to fuse features of different scales, enhancing the model's ability to encode contextual information. A large amount of road segmentation data suitable for blind guiding scenarios is collected and labeled, and a corresponding dataset is generated; the model is trained and tested on this dataset. 
The experimental results show that the mean Intersection over Union (mIoU) of the proposed method is 96.5%, which is better than that of existing image segmentation models. The proposed model achieves a running speed of 201 frames per second on an NVIDIA RTX 3090 Ti, which is higher than that of existing lightweight image segmentation models. The model can also be deployed on an NVIDIA AGX Xavier at 53 frames per second, which meets the requirements of practical applications.
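A common way to fuse a two-branch design like the one above is to upsample the low-resolution semantic map to the detail resolution and combine the two elementwise; the sketch below uses nearest-neighbor upsampling and addition as a stand-in for the paper's fusion module, whose exact internals are not given in the abstract:

```python
def upsample_nearest(feat, factor=2):
    """Nearest-neighbor upsampling of an H x W map (nested lists):
    each value is repeated factor times in both dimensions."""
    out = []
    for row in feat:
        wide = [v for v in row for _ in range(factor)]
        out.extend([wide[:] for _ in range(factor)])
    return out

def fuse(detail, semantic_low):
    """Toy two-branch fusion: upsample the low-resolution semantic map
    to the detail resolution, then add elementwise."""
    up = upsample_nearest(semantic_low)
    return [[d + s for d, s in zip(dr, sr)] for dr, sr in zip(detail, up)]
```

The detail branch keeps spatial precision while the upsampled semantic branch injects context, which is the division of labor the abstract describes.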

  • Artificial Intelligence and Pattern Recognition
    LI Bowen, DING Muheng, FANG Meihua, ZHU Guiping, WEI Zhiyong, CHENG Wei, LI Yayun, BIAN Shuangshuang
    Computer Engineering. 2025, 51(10): 87-96. https://doi.org/10.19678/j.issn.1000-3428.0069857

    Driver fatigue is a major cause of traffic accidents, and driver fatigue state classification based on Electroencephalograms (EEGs) is an important task in the field of artificial intelligence. In recent years, deep learning models that incorporate attention mechanisms have been widely applied to EEG-based fatigue recognition. While these approaches have shown promise, several studies disregard the inherent features of the EEG data themselves. Additionally, the mechanisms and effects of attention on the classifier remain underexplored, so the specific effects of different attention states on classification performance cannot be explained. Therefore, this study selects the SEED-VIG data as the research object and adopts the ReliefF feature selection algorithm to construct optimized Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM) network, and Support Vector Machine (SVM) models based on self-attention, multihead attention, channel attention, and spatial attention mechanisms. Experimental results on the EEG data in the SEED-VIG dataset show that the performance of several neural network models optimized with multimodal attention mechanisms improves in terms of accuracy, recall, F1 score, and other indicators. Among them, the Convolutional Block Attention Module (CBAM)-CNN model, which enhances spatial and channel information, achieves the best performance, with a mean accuracy of 84.7% and a standard deviation of 0.66.
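CBAM combines channel attention (which channels matter) with spatial attention (which locations matter), each built from average and max pooling. The sketch below keeps that pooling structure but omits CBAM's shared MLP and 7×7 convolution, so it illustrates the mechanism rather than reproducing the module used in the paper:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(feat):
    """Per-channel weights from global average and max pooling.
    feat: C x H x W nested lists. (CBAM's shared MLP is omitted.)"""
    weights = []
    for ch in feat:
        vals = [v for row in ch for v in row]
        avg = sum(vals) / len(vals)
        mx = max(vals)
        weights.append(sigmoid(avg + mx))
    return weights

def spatial_attention(feat):
    """H x W weights from channel-wise mean and max at each location.
    (CBAM's 7x7 convolution over the pooled maps is omitted.)"""
    c, h, w = len(feat), len(feat[0]), len(feat[0][0])
    return [[sigmoid(sum(feat[k][i][j] for k in range(c)) / c
                     + max(feat[k][i][j] for k in range(c)))
             for j in range(w)] for i in range(h)]
```

Multiplying the input by these weights suppresses uninformative channels and locations, which is the "enhance spatial and channel information" effect the abstract attributes to CBAM-CNN.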

  • Artificial Intelligence and Pattern Recognition
    ZHANG Hong, LI Feng, MA Yanhong, JI Wenxuan, ZHENG Qipeng
    Computer Engineering. 2025, 51(10): 140-149. https://doi.org/10.19678/j.issn.1000-3428.0069489

    Accurate photovoltaic power prediction is crucial for enhancing grid stability and improving energy utilization efficiency. To address the limitations of existing methods, which struggle to simultaneously capture both the long-term dependencies and short-term variation patterns of photovoltaic power, this study proposes a novel photovoltaic power prediction method named Solarformer. This method integrates a Pyramid Attention Module (PAM) with a Temporal Convolutional Network (TCN) to optimize the Transformer architecture. First, multiple feature selection mechanisms are employed to screen the input features and enhance the model's ability to characterize photovoltaic data. Second, a coarse-grained construction module and the PAM are used to optimize the Transformer encoder, capturing the long-term temporal dependencies of photovoltaic power at multiple scales. Third, a constraint mechanism based on the sunrise-sunset effect of photovoltaic power and the TCN are employed to optimize the Transformer decoder, strengthening the model's ability to capture and model the short-term variation patterns of photovoltaic power. Experimental results on the Sanyo dataset from Australia demonstrate that Solarformer effectively improves photovoltaic power forecasting accuracy. Compared with the DLinear model, it reduces the Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Symmetric Mean Absolute Percentage Error (SMAPE) by approximately 7.45%, 6.99%, and 14.10%, respectively.
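The TCN used in the decoder is built from dilated causal convolutions, which guarantee that the output at time t depends only on inputs at t and earlier; dilation widens the receptive field without adding parameters. A minimal single-filter sketch (Solarformer's actual TCN stacks many such layers with residual connections):

```python
def causal_conv1d(x, kernel, dilation=1):
    """Dilated causal 1-D convolution: the output at step t combines
    x[t], x[t-d], x[t-2d], ... (missing past values are treated as 0,
    i.e., implicit left zero padding), so no future value leaks in."""
    k = len(kernel)
    out = []
    for t in range(len(x)):
        s = 0.0
        for i in range(k):
            idx = t - (k - 1 - i) * dilation
            if idx >= 0:
                s += kernel[i] * x[idx]
        out.append(s)
    return out
```

With kernel [1, 1], each output is the sum of the current value and the value `dilation` steps back, which shows how the dilation rate controls how far into the past each output looks.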

  • Artificial Intelligence and Pattern Recognition
    SHEN Sitong, WANG Yaowu, XIE Zaipeng, TANG Bin
    Computer Engineering. 2025, 51(6): 102-115. https://doi.org/10.19678/j.issn.1000-3428.0070739

    Multi-Agent Reinforcement Learning (MARL) plays a crucial role in solving complex cooperative tasks. However, traditional methods face significant limitations under dynamic environments and information nonstationarity. To address these challenges, this paper proposes a Role learning-based Multi-Agent reinforcement learning framework (RoMAC). The framework employs role division based on action attributes and uses a role assignment network to dynamically allocate roles to agents, thereby enhancing the efficiency of multiagent collaboration. The framework adopts a hierarchical communication design, comprising inter-role communication based on attention mechanisms and inter-agent communication guided by mutual information. In inter-role communication, attention mechanisms generate efficient messages for coordination between role delegates. In inter-agent communication, mutual information is used to generate targeted information and improve decision-making quality within role groups. Experiments conducted in the StarCraft Multi-Agent Challenge (SMAC) environment show that RoMAC achieves an average win rate improvement of approximately 8.62 percentage points, a reduction in convergence time of 0.92×10⁶ timesteps, and an average decrease of 28.18 percentage points in communication load. Ablation studies further validate the critical contribution of each module, demonstrating the robustness and flexibility of the model. Overall, the experimental results indicate that RoMAC offers significant advantages in MARL cooperative tasks, providing reliable support for efficiently addressing complex challenges.
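Attention-based message weighting of the kind used for inter-role communication can be sketched as scaled dot-product attention: one role's query is scored against the other roles' keys, and a softmax turns the scores into mixing weights for their messages. This is a generic illustration, not RoMAC's exact network:

```python
import math

def attention_weights(query, keys):
    """Scaled dot-product attention weights for one query against a list
    of keys (all vectors of equal length). Returns a softmax-normalized
    weight per key, computed with the usual max-shift for stability."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```

The weights always sum to 1, and keys more aligned with the query receive larger weights, so each role delegate attends most to the roles most relevant to its current state.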

  • Research Hotspots and Reviews
    Mayilamu Musideke, GAO Yuxin, ZHANG Situo, FENG Ke, Abudukelimu Abulizi, Halidanmu Abudukelimu
    Computer Engineering. 2025, 51(8): 16-38. https://doi.org/10.19678/j.issn.1000-3428.0070619

    With the rapid advancement of general artificial intelligence technology, the application of foundational models across various fields has gained increasing attention. In image segmentation, the Segment Anything Model (SAM), as a foundational model, demonstrates notable advantages in enhancing image comprehension and processing efficiency. While SAM achieves state-of-the-art performance in image segmentation, further optimization in power consumption, computational efficiency, and cross-domain adaptability is required. This review provides an in-depth exploration of the potential improvements to SAM across several crucial dimensions, such as enhancing speed and computational efficiency, improving model accuracy and robustness, increasing adaptability and generalization, optimizing prompt engineering, and boosting data utilization and transfer learning capabilities. With these enhancements, SAM is expected to sustain high efficiency in highly complex tasks and better meet requirements of various fields and application contexts. In addition, this review summarizes the practical applications of SAM in various fields, including medical imaging, remote sensing, and the mechanical industry, and demonstrates the suitability and challenges of the model in different scenarios. Moreover, this review provides a detailed overview of commonly used datasets and evaluation metrics in the field of image segmentation. Through experimental comparative analyses, the impact of Vision Transformer (ViT) variants on the performance of SAM is assessed, along with performance evaluations of enhanced models, such as EfficientSAM, EfficientViT-SAM, MobileSAM, and RobustSAM. The challenges faced by SAM and its improved models in real-world applications are also discussed, and future research directions are proposed. 
This review aims to provide researchers with a comprehensive understanding of the advancements and applications of SAM and its variants, offering insights that may inform the development of new models.

  • Space-Air-Ground Integrated Computing Power Networks
    WANG Kewen, ZHANG Weiting, SUN Tong
    Computer Engineering. 2025, 51(5): 52-61. https://doi.org/10.19678/j.issn.1000-3428.0069471

    In response to the increasing demand for fast response and large-scale coverage in application scenarios such as satellite data processing and vehicle remote control, this study utilizes hierarchical control and artificial intelligence technology to design a resource scheduling mechanism for space-air-ground integrated computing power networks. The air, space, and ground networks are divided into three domains, and domain controllers are deployed for resource management in their corresponding local domains. Areas are divided based on the coverage of satellites and drones to ensure effective service guarantees, efficient data transmission, and task processing. A multi-agent reinforcement learning-based scheduling algorithm is proposed to optimize resource utilization in space-air-ground integrated computing power networks, in which each domain controller is treated as an agent with task scheduling and resource allocation capabilities. Intelligent resource scheduling and efficient resource allocation for computing tasks are realized through collaborative learning and distributed decision-making under delay and energy consumption constraints. Computing tasks are generated in different scenarios and processed in real time. Simulation results show that the proposed mechanism can effectively improve resource utilization and shorten task response time.

  • Space-Air-Ground Integrated Computing Power Networks
    MO Dingtao, JU Ying, LI Wenjin, ZHANG Yasheng, HE Ci, DONG Feihu
    Computer Engineering. 2025, 51(5): 9-19. https://doi.org/10.19678/j.issn.1000-3428.0069654

    Satellite networks have wide coverage, strong mobility, and ultralow power consumption, which allow them to act as an extension of ground communication networks, thereby promoting the construction of integrated space-ground networks. However, the opening and popularization of satellite services have increased network traffic and made it more complex, making management and service scheduling challenging. Thus, designing an efficient network traffic classification method and allocating reasonable computing resources to different types of satellite network traffic have become critical to alleviating the pressure on satellite networks. Traditional network traffic classification methods based on ports, payloads, statistics, and behavior have issues concerning effectiveness and privacy, making them inadequate for complex network services, whereas deep learning techniques developed alongside large models offer a promising alternative. Therefore, to enhance the operational efficiency of satellite networks and optimize their computing power, this study proposes a network traffic classification method based on the Global Perception Module (GPM) and the Vision Transformer (ViT) model. This method transforms network traffic data into grayscale images and extracts features to fully capture global and local information. The processed data are then input into the ViT model, which leverages its multihead attention mechanism to extract data correlation information and enhance classification capability. Experimental results indicate that the accuracy of the GPM-ViT model reaches 97.86%, a significant improvement over baseline models.
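The transformation of raw traffic bytes into grayscale images can be sketched as follows, assuming a fixed image side and zero padding; the paper's exact preprocessing (e.g., how flows are sliced into sessions) is not detailed in the abstract, so this is an illustration of the common pattern:

```python
def bytes_to_grayscale(payload, side=28):
    """Map raw traffic bytes to a side x side grayscale image:
    truncate or zero-pad the payload to side*side bytes, then treat
    each byte (0-255) as one pixel intensity, row by row."""
    n = side * side
    data = list(payload[:n]) + [0] * max(0, n - len(payload))
    return [data[i * side:(i + 1) * side] for i in range(side)]
```

The resulting fixed-size image can be fed directly to an image model such as ViT, which is what lets vision architectures classify traffic without hand-crafted features.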

  • AI-Enabled Vehicular Edge Computing
    ZHU Siyuan, LI Jiasheng, ZOU Danping, HE Di, YU Wenxian
    Computer Engineering. 2025, 51(9): 14-24. https://doi.org/10.19678/j.issn.1000-3428.0069534

    Detecting defects on unstructured roads is important for road traffic safety; however, the annotated datasets required for detection are limited. This study proposes the Multi-Augmentation with Memory (MAM) semi-supervised object detection algorithm to address the lack of annotated datasets for unstructured roads and the inability of existing models to learn from unlabeled data. First, a cache mechanism is introduced to store bounding box regression information for unannotated images and images with pseudo annotations, avoiding the computational resource wastage caused by repeated matching. Second, the study proposes a hybrid data augmentation strategy that mixes the cached pseudo-labeled images with the unlabeled images input into the student model, to enhance the model's generalizability to new data and balance the scale distribution of images. The MAM algorithm is not tied to a particular object detection model and better maintains the consistency of object bounding boxes, thus avoiding the need to compute a consistency loss. Experimental results show that the MAM algorithm is superior to other fully supervised and semi-supervised learning algorithms. On a self-built unstructured road defect dataset, called Defect, the MAM algorithm achieves improvements of 6.8, 11.1, and 6.0 percentage points in mean Average Precision (mAP) over the Soft Teacher algorithm at annotation ratios of 10%, 20%, and 30%, respectively. On a self-built unstructured road pothole dataset, called Pothole, the MAM algorithm achieves mAP improvements of 5.8 and 4.3 percentage points over the Soft Teacher algorithm at annotation ratios of 15% and 30%, respectively.
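The cache mechanism for pseudo annotations might look like the sketch below: pseudo boxes are stored per image so they need not be re-matched on later iterations. The keying by image id, the capacity, and the oldest-first eviction policy are all assumptions made for illustration, not details given in the abstract:

```python
from collections import OrderedDict

class PseudoLabelCache:
    """Hypothetical cache of pseudo bounding boxes keyed by image id.
    When capacity is exceeded, the oldest entry is evicted (assumed
    policy). Values can be any bounding box representation."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.store = OrderedDict()

    def put(self, image_id, boxes):
        self.store[image_id] = boxes
        self.store.move_to_end(image_id)  # mark as most recently written
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # drop the oldest entry

    def get(self, image_id):
        return self.store.get(image_id, [])
```

On a cache hit, the stored boxes are reused directly instead of re-running teacher-student matching, which is the computational saving the abstract describes.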