
Collections

Special Issue: Service Computing in the Era of Large Language Models
  • Service Computing in the Era of Large Language Models
    LIN Dan, LU Shunfeng, LIU Ziyan, ZHANG Bozhao, HE Long, JIANG Zigui, WU Jiajing, ZHENG Zibin
    Computer Engineering. 2026, 52(1): 1-21. https://doi.org/10.19678/j.issn.1000-3428.0253233

    Blockchain has gradually evolved into a critical infrastructure that supports the digital economy. However, its inherent characteristics, such as anonymity, cross-chain interoperability, and multi-party participation, have led to frequent security incidents, including fraud, money laundering, and cyberattacks, which pose serious threats to the stability and compliance of the blockchain ecosystem. Although existing analytical tools and methods have made notable progress in blockchain service security, they suffer from limited generalizability, insufficient reasoning capabilities, and poor adaptability to the evolution of complex business logic. The rapid development of generative Large Language Models (LLMs) has significantly reshaped the service computing paradigm. With their strong capabilities in natural language understanding, knowledge reasoning, and multimodal integration, LLMs provide new perspectives and technical pathways for research on blockchain service security. This paper systematically reviews the progress of LLM applications in three major areas: pre-event smart contract auditing, in-event anomaly detection, and post-event cross-chain behavior correlation. It then summarizes their advantages and limitations and highlights representative practices of LLM-enabled blockchain security. Finally, open research challenges and future directions are discussed, aiming to provide insights for building a trustworthy, interpretable, and efficient framework for blockchain service computing and governance.

  • Service Computing in the Era of Large Language Models
    ZHAO Xudong, WU Hongyue, MENG Ke, XU Xiaolong, DOU Wanchun
    Computer Engineering. 2026, 52(1): 61-75. https://doi.org/10.19678/j.issn.1000-3428.0252977

    With the rapid development of the Internet, cloud computing, and artificial intelligence, service recommendation has become a key technique in service computing. It helps users find appropriate services quickly and accurately, improves resource utilization, and enhances user experience. This paper presents a systematic review of the research progress in service recommendation and summarizes representative studies. The review introduces three main categories of recommendation methods: traditional, context-aware, and neural network-based. Each category is described in terms of fundamental principles, typical applications, advantages, and limitations. This paper also discusses the major challenges in service recommendation, including data sparsity and cold start; incomplete and noisy Quality of Service (QoS) data; dynamic changes in services and contexts; insufficient explainability; and issues of real-time performance, scalability, privacy, and security. Finally, this paper presents an overview of the limitations of current research and explores future research directions. Emerging technologies, such as big data analytics, Knowledge Graphs (KGs), deep learning, Large Language Models (LLMs), and reinforcement learning, are highlighted as promising approaches for improving the intelligence, personalization, and trustworthiness of service recommendations. This review provides a comprehensive understanding of the field and serves as a valuable reference for further research and practical applications.
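
    One classical technique in this line of work, matrix factorization over a sparse user-service QoS matrix, illustrates how recommendation methods cope with the data sparsity challenge the review mentions. The sketch below is a minimal illustration, not any specific method from the surveyed literature; the toy matrix, latent dimension, and learning rate are all illustrative choices.

```python
import numpy as np

# Toy user-service QoS matrix (e.g., response times); 0 marks missing entries.
Q = np.array([
    [1.2, 0.0, 3.1],
    [1.0, 2.5, 0.0],
    [0.0, 2.4, 3.0],
])
mask = Q > 0

rng = np.random.default_rng(0)
k, lr, reg = 2, 0.02, 0.01                        # latent factors, step size, L2 penalty
U = 0.1 * rng.standard_normal((Q.shape[0], k))    # user factors
S = 0.1 * rng.standard_normal((Q.shape[1], k))    # service factors

# Gradient descent on observed entries only: the mask keeps missing QoS
# values out of the loss, which is how factorization handles sparsity.
for _ in range(3000):
    E = (Q - U @ S.T) * mask
    U += lr * (E @ S - reg * U)
    S += lr * (E.T @ U - reg * S)

pred = U @ S.T                                    # dense predictions fill missing QoS
train_rmse = float(np.sqrt(np.mean(E[mask] ** 2)))
```

    The filled-in entries of `pred` can then be ranked per user to recommend services with the best predicted QoS.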

  • Service Computing in the Era of Large Language Models
    LIU Ronglong, LI Ziwei, WAN Yue, WU Jiajing, JIANG Zigui
    Computer Engineering. 2026, 52(1): 76-85. https://doi.org/10.19678/j.issn.1000-3428.0252752

    As the paradigm of the "decentralized next-generation Internet," Web3, built on blockchain technology, has become an emerging field with great potential in the digital intelligence service ecosystem. However, Web3 phishing websites pose a serious threat to the health of this ecosystem. Phishers carefully craft domain names as the primary bait, inducing users to visit the sites and perform high-risk operations that allow their digital assets to be stolen. Current anti-phishing work for Web3 focuses primarily on phishing account detection, phishing transaction detection, and phishing gang mining, whereas existing phishing website domain name detection mainly targets traditional phishing websites and suffers from insufficient adaptability and a lack of systematic analysis. To this end, a detection method called WPWHunter is proposed for Web3 phishing website domain names; it conducts a multidimensional analysis of detected real Web3 phishing websites and explores the potential of Large Language Models (LLMs) for web page analysis. WPWHunter detects three types of features in Web3 phishing website domain names: inducing words, visual deception, and item name imitation. The experimental results show that WPWHunter can effectively detect suspicious Web3 phishing domains, achieving a G-means of 0.769 on the test set, 0.048 higher than that of the best-performing baseline method. Additionally, as a supplementary exploratory experiment, three general-purpose LLMs are used to analyze the content of Web3 phishing websites that WPWHunter failed to detect, and the logic the LLMs use to identify Web3 phishing websites is summarized.
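
    The three domain-name feature types can be sketched as simple checks. Everything below is a hypothetical illustration: the word lists, homoglyph map, project names, and similarity threshold are placeholders, not the feature definitions actually used by WPWHunter.

```python
from difflib import SequenceMatcher

# Hypothetical placeholders; the real feature sets are defined in the paper.
INDUCING_WORDS = {"airdrop", "claim", "reward", "free", "bonus"}
HOMOGLYPHS = {"0": "o", "1": "i", "3": "e", "5": "s", "rn": "m"}
KNOWN_PROJECTS = {"uniswap", "opensea", "metamask", "pancakeswap"}

def normalize(label):
    """Undo common visual-deception substitutions (e.g. '1' standing in for 'i')."""
    for fake, real in HOMOGLYPHS.items():
        label = label.replace(fake, real)
    return label

def domain_features(domain):
    """Score one domain for the three feature types WPWHunter targets."""
    label = domain.split(".")[0].lower()
    norm = normalize(label)
    return {
        # Lure vocabulary embedded in the domain name.
        "inducing_words": any(w in norm for w in INDUCING_WORDS),
        # Normalization changed something, so a look-alike character was used.
        "visual_deception": norm != label,
        # A domain token closely resembles a well-known project name.
        "item_name_imitation": any(
            SequenceMatcher(None, token, p).ratio() >= 0.8
            for token in norm.split("-")
            for p in KNOWN_PROJECTS
        ),
    }
```

    For example, `domain_features("un1swap-claim.io")` flags all three features, while a benign name such as `example.com` triggers none.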

  • Service Computing in the Era of Large Language Models
    CHU Zeshi, DUAN Yucong, WANG Minhui
    Computer Engineering. 2026, 52(1): 86-94. https://doi.org/10.19678/j.issn.1000-3428.0253161

    Purpose-driven Artificial Intelligence (AI) systems must exhibit adaptive purpose perception, dynamic adjustments, and multi-level feedback when operating in complex and evolving environments. However, traditional AI models lack a unified mechanism for modeling the purpose lifecycle, resulting in challenges in behavior traceability, control, and optimization, which in turn limit interpretability and long-term effectiveness. This paper proposes a Data-Information-Knowledge-Wisdom-Purpose (DIKWP)-based semantic framework for purpose lifecycle management oriented toward cognitive evolutionary pathways. The mechanism consists of five semantic stages: data-layer dynamic verification, information-layer migration response, knowledge-layer logical reconstruction, wisdom-layer value evolution, and purpose-layer goal closure and conflict regulation, forming a multi-level, multi-goal, multi-feedback semantic governance structure. In addition, multi-layer graph modeling and cognitive space differentiation are introduced, specifically between conceptual and semantic spaces, to enable structured and visual modeling of purpose generation, updating, and tuning. By integrating the dual-loop "experience-narrative" structure from artificial consciousness theory, the purpose stability and adaptability of the system in interactive environments are enhanced. The proposed mechanism was theoretically validated in smart home and smart city scenarios. The experimental results demonstrate its generality, scalability, and robustness, offering theoretical and engineering support for value alignment, semantic safety, and autonomous evolution in sovereign AI systems.

  • Service Computing in the Era of Large Language Models
    ZHANG Longyao, WEN Dongxin, MA Zhuangyu, SHU Yanjun, LI Qing, LIU Mingyi, ZUO Decheng
    Computer Engineering. 2026, 52(1): 22-32. https://doi.org/10.19678/j.issn.1000-3428.0252754

    Large Language Model (LLM)-based Multi-Agent Systems (MASs) have demonstrated significant potential in handling complex tasks, but their distributed nature and interaction uncertainty can lead to diverse anomalies that threaten system reliability. This paper presents a comprehensive review that systematically identifies and classifies these anomalies. Seven representative multi-agent systems and their corresponding datasets are selected, yielding 13 418 operational traces, and a hybrid data analysis method is employed, combining preliminary LLM analysis with expert manual validation. A fine-grained, four-level anomaly classification framework is constructed, encompassing the following anomaly levels: model understanding and perception, agent interaction, task execution, and external environment. Typical cases are analyzed to reveal the underlying logic and external causes of each type of anomaly. Statistical analysis indicates that model understanding and perception anomalies account for the highest proportion, with "context hallucination" and "task instruction misunderstanding" being the primary issues. Agent interaction anomalies represent 16.8%, primarily caused by "information concealment". Task execution anomalies account for 27.1%, mainly characterized by "repetitive decision errors". External environment anomalies account for 18.3%, with "memory conflicts" as the predominant factor. In addition, model understanding and perception anomalies often act as root causes, triggering anomalies at other levels, which highlights the importance of enhancing fundamental model capabilities. These classification and root cause analyses aim to provide theoretical support and practical reference for building highly reliable LLM-based multi-agent systems.
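
    A first-pass triage over trace text could bucket anomalies into the paper's four levels before expert validation. The keyword rules below are entirely hypothetical stand-ins; the study itself used preliminary LLM analysis plus manual expert validation, not string matching.

```python
# Hypothetical keyword rules mapping trace text to the four anomaly levels
# of the classification framework; placeholders for illustration only.
LEVELS = {
    "model understanding and perception": ["hallucination", "misunderstood instruction"],
    "agent interaction": ["information concealment", "no reply"],
    "task execution": ["repeated decision", "wrong tool"],
    "external environment": ["memory conflict", "api timeout"],
}

def triage(trace_text):
    """Return the first anomaly level whose keywords appear in the trace, else None."""
    text = trace_text.lower()
    for level, keywords in LEVELS.items():
        if any(k in text for k in keywords):
            return level
    return None
```

    Such a coarse pass can only pre-sort traces for human review; the paper's root-cause analysis requires reading each case, which is why expert validation follows.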

  • Service Computing in the Era of Large Language Models
    ZHANG Junna, WANG Hongzun, DING Chuntao
    Computer Engineering. 2026, 52(1): 33-60. https://doi.org/10.19678/j.issn.1000-3428.0252721

    Post-Training Quantization (PTQ) is an efficient model compression method that converts the parameters of high-precision floating-point models into low-bit integer representations without requiring retraining, using only a small amount of unlabeled calibration data. This method significantly reduces storage and computational overhead while maximizing the retention of the original model's inference accuracy; therefore, it is widely recognized and adopted in both academia and industry. This paper systematically summarizes the progress of research on PTQ from four dimensions: quantization steps, method classification, tool ecosystem, and application advancements. First, a clear framework for the quantization process is constructed, covering steps such as dynamic range statistics, quantization parameter calculation, weight and activation quantization, error optimization, and model generation. Second, a complete classification system for quantization methods is proposed, which includes quantization granularity, bit width, calibration methods, and structure-guided quantization. Third, the tool ecosystem supporting the large-scale application of PTQ is analyzed, and its value in hardware adaptation and engineering deployment is discussed. Finally, this paper summarizes the progress in the integration and application of PTQ methods and highlights practical challenges, particularly those related to cross-modal consistency, extremely low-bit semantic collapse, and hardware adaptation. These challenges not only reveal the limitations of current technologies but also provide important directions for future research. This review provides a reference framework for PTQ methods in academia and industry, thereby facilitating the widespread application of artificial intelligence in resource-constrained scenarios.
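
    The quantization steps listed above (dynamic range statistics, quantization parameter calculation, weight and activation quantization, error measurement) can be sketched with a simple asymmetric min-max quantizer. This is a minimal illustration of the generic pipeline, not any specific method from the survey; production PTQ tools add clipping-range search, per-channel scales, and dedicated error optimization.

```python
import numpy as np

def calibrate_minmax(samples):
    """Dynamic range statistics over a small calibration set."""
    lo = min(float(s.min()) for s in samples)
    hi = max(float(s.max()) for s in samples)
    return lo, hi

def quant_params(lo, hi, bits=8):
    """Quantization parameter calculation: scale and zero-point
    for an asymmetric uniform quantizer over [lo, hi]."""
    qmax = (1 << bits) - 1
    scale = (hi - lo) / qmax
    zero_point = int(round(-lo / scale))
    return scale, zero_point

def quantize(x, scale, zero_point, bits=8):
    """Map a float tensor to low-bit integer codes."""
    qmax = (1 << bits) - 1
    return np.clip(np.round(x / scale) + zero_point, 0, qmax).astype(np.uint8)

def dequantize(q, scale, zero_point):
    """Recover an approximate float tensor to measure quantization error."""
    return (q.astype(np.float32) - zero_point) * scale

# Calibration needs only a handful of unlabeled samples, as the survey notes.
rng = np.random.default_rng(0)
calib = [rng.standard_normal(64).astype(np.float32) for _ in range(8)]
lo, hi = calibrate_minmax(calib)
scale, zp = quant_params(lo, hi)
w = calib[0]
round_trip_mse = float(np.mean((dequantize(quantize(w, scale, zp), scale, zp) - w) ** 2))
```

    The round-trip error shrinks with the square of the scale, which is why the extremely low bit widths discussed in the survey are so much harder: halving the bit width doubles the scale and quadruples this error term.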