
Computer Engineering ›› 2025, Vol. 51 ›› Issue (6): 223-235. doi: 10.19678/j.issn.1000-3428.0069133

• Cyberspace Security •

A Privacy-Preserving Federated Learning Scheme Against Poisoning Attack

YAO Yupeng1, WEI Lifei1,2, ZHANG Lei1,*

  1. College of Information Technology, Shanghai Ocean University, Shanghai 201306, China
    2. College of Information Engineering, Shanghai Maritime University, Shanghai 201306, China
  • Received: 2023-12-29  Online: 2025-06-15  Published: 2024-06-20
  • Contact: ZHANG Lei

  • Supported by: National Natural Science Foundation of China (61972241); Natural Science Foundation of Shanghai (22ZR1427100); Shanghai Soft Science Research Project (23692106700)

Abstract:

Federated learning enables participants to collaboratively train models without revealing their raw data, thereby effectively addressing the privacy issues of distributed data. However, as research advances, federated learning continues to face security threats such as privacy inference attacks and poisoning attacks by malicious clients. Most existing improvements to federated learning address either privacy protection or poisoning resistance, but not both types of attack simultaneously. To defend against both inference and poisoning attacks in federated learning, a privacy-preserving federated learning scheme against poisoning attacks, named APFL, is proposed. The scheme includes a model detection algorithm that uses Differential Privacy (DP) techniques and assigns each client an aggregation weight based on the cosine similarity between models; homomorphic encryption is then employed for the weighted aggregation of the local models. Experimental evaluations on the MNIST and CIFAR10 datasets demonstrate that APFL effectively filters out malicious models and defends against poisoning attacks while preserving data privacy. When the poisoning ratio does not exceed 50%, APFL achieves model performance consistent with that of the Federated Averaging (FedAvg) scheme in a poison-free environment; compared with the Krum and FLTrust schemes, it reduces the model test error rate by an average of 19% and 9%, respectively.
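
The following Python sketch illustrates the kind of similarity-based weighting the abstract describes: each client update is scored by its cosine similarity to a reference update, the scores are perturbed with Gaussian noise as a stand-in for the DP step, and negative scores are clipped so that dissimilar (suspect) updates receive no weight. The reference update, noise scale, and clipping rule here are illustrative assumptions, not details taken from the paper.

import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two flattened model updates.
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def aggregation_weights(client_updates, reference_update, dp_sigma=0.05, rng=None):
    # Score each client by (noisy) cosine similarity to the reference update.
    # dp_sigma is a placeholder noise scale standing in for the DP mechanism,
    # not a calibrated privacy parameter.
    rng = rng if rng is not None else np.random.default_rng()
    scores = np.array([cosine_similarity(u, reference_update) for u in client_updates])
    scores = scores + rng.normal(0.0, dp_sigma, size=scores.shape)
    scores = np.clip(scores, 0.0, None)   # dissimilar (suspect) updates get zero weight
    total = scores.sum()
    if total > 0:
        return scores / total             # normalize survivors into aggregation weights
    return np.full(len(scores), 1.0 / len(scores))

def aggregate(client_updates, weights):
    # Weighted aggregation of local updates (performed under homomorphic
    # encryption in the paper; computed in the clear here for brevity).
    return sum(w * u for w, u in zip(weights, client_updates))

# Toy usage: three honest clients and one sign-flipped (poisoned) update.
rng = np.random.default_rng(0)
honest = [rng.normal(1.0, 0.1, size=10) for _ in range(3)]
updates = honest + [-honest[0]]
reference = np.mean(honest, axis=0)       # stand-in for a trusted reference update
w = aggregation_weights(updates, reference, rng=rng)
print("weights:", np.round(w, 3))         # the poisoned client gets ~0 weight
print("aggregate:", np.round(aggregate(updates, w)[:3], 3))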

Key words: federated learning, Differential Privacy (DP), homomorphic encryption, privacy-preserving, poisoning attack
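
To make the homomorphic-encryption step concrete, the sketch below performs the weighted aggregation over Paillier ciphertexts using the open-source python-paillier (phe) library. This is a minimal illustration of additively homomorphic weighted summation under a single key pair, not the paper's protocol; key management and the interaction between clients and server are simplified away.

import numpy as np
from phe import paillier

# Small key size for demonstration speed only; use a full-size key in practice.
public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

def encrypt_update(update):
    # Client side: encrypt each parameter of a flattened local update.
    return [public_key.encrypt(float(x)) for x in update]

def weighted_aggregate(encrypted_updates, weights):
    # Server side: homomorphic weighted sum without seeing any plaintext.
    # Paillier EncryptedNumber supports ciphertext + ciphertext and
    # ciphertext * plaintext scalar, which is all a weighted sum needs.
    n_params = len(encrypted_updates[0])
    agg = [encrypted_updates[0][j] * weights[0] for j in range(n_params)]
    for enc, w in zip(encrypted_updates[1:], weights[1:]):
        for j in range(n_params):
            agg[j] = agg[j] + enc[j] * w
    return agg

# Toy usage: two clients with weights from the detection step (fixed here).
updates = [np.array([0.5, -1.0, 2.0]), np.array([0.4, -0.8, 1.9])]
weights = [0.6, 0.4]
encrypted = [encrypt_update(u) for u in updates]
agg = [private_key.decrypt(c) for c in weighted_aggregate(encrypted, weights)]
print(np.round(agg, 3))  # equals 0.6*updates[0] + 0.4*updates[1]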
