
Computer Engineering



Adaptive Privacy-Preserving Federated Learning Scheme Based on Incentive Mechanism

  • Published: 2025-05-19


Abstract: Federated learning leverages client data resources to collaboratively train a global model, whose performance depends on the quality of the clients' data and their willingness to participate. Clients therefore expect appropriate compensation after contributing high-quality data, which strengthens their motivation to participate in training. In addition, because the local model parameters uploaded by clients embed information about their private data, clients face a risk of privacy leakage. To address these problems, this paper proposes an adaptive privacy-preserving federated learning scheme based on an incentive mechanism. First, a pre-decision game auction mechanism is designed to ensure that clients truthfully report their costs while achieving a Nash Equilibrium (NE). Second, a training quality evaluation algorithm is developed based on training time and model loss; it determines each client's compensation according to its overall training quality score, thereby incentivizing clients with high-quality data to participate in training. Finally, an adaptive differential privacy technique is employed to perturb the local model parameters, improving model utility through dynamic noise allocation. Theoretical analysis shows that the proposed scheme satisfies the security and privacy protection requirements, and experimental results validate its effectiveness.
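To make the quality-evaluation and dynamic-noise-allocation ideas above concrete, the following minimal Python sketch is included. It is illustrative only and assumes details not stated in the abstract: a score that combines training time and model loss through a weight alpha, Laplace noise for the differential privacy perturbation, and a per-round privacy budget weighted toward later rounds. The function names (quality_score, adaptive_dp_perturb) are hypothetical and are not taken from the paper.

    import numpy as np

    def quality_score(train_time, loss, alpha=0.5):
        # Hypothetical evaluation: shorter training time and lower loss both raise the score.
        # alpha balances the two terms; the paper's actual formula is not given in the abstract.
        return alpha / (1.0 + train_time) + (1.0 - alpha) / (1.0 + loss)

    def adaptive_dp_perturb(params, epsilon_total, round_idx, total_rounds,
                            sensitivity=1.0, rng=None):
        # Add Laplace noise to a client's flattened parameter vector.
        # One plausible "dynamic allocation": give later rounds a larger share of the
        # total budget, so early (coarse) updates get more noise than late (refined) ones.
        rng = rng or np.random.default_rng()
        weights = np.arange(1, total_rounds + 1, dtype=float)
        eps_round = epsilon_total * weights[round_idx] / weights.sum()
        scale = sensitivity / eps_round  # Laplace scale b = sensitivity / epsilon
        return params + rng.laplace(0.0, scale, size=params.shape)

    # Toy usage: one client's 5-dimensional update perturbed over 3 rounds.
    update = np.zeros(5)
    for r in range(3):
        noisy = adaptive_dp_perturb(update, epsilon_total=1.0, round_idx=r, total_rounds=3)
        print("round", r, np.round(noisy, 3))
    print("quality score:", round(quality_score(train_time=12.0, loss=0.35), 3))

Compensation could then be distributed in proportion to each client's share of the total quality score, consistent with the abstract's description; the payment rule of the pre-decision game auction itself is not reproduced here.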