
Computer Engineering (计算机工程)


Adaptive Differential Privacy Federated Learning with Integrated Parameter Personalization

  • Published: 2026-04-14

Abstract: Personalized federated learning trains models by sharing parameters rather than raw data, yet it remains vulnerable to inference attacks, which motivates the widespread use of differential privacy as a defense. To overcome the limitations of static model partitioning and uniform noise injection in conventional Differentially Private Personalized Federated Learning (DP-PFL), this paper proposes DP-FedADC, an adaptive differentially private federated learning framework with integrated parameter personalization. First, the framework introduces Adaptive Parameter Partitioning (APP) to analyze model parameters dynamically and to separate personalized parameters from shared parameters adaptively. On this basis, a Differentiated Parameter Update (DPU) strategy applies distinct regularization constraints to the two parameter types, stabilizing critical parameter updates and mitigating the distortion of optimization directions caused by gradient clipping. Second, a Client-level Adaptive Privacy Budget Allocation (CAPBA) strategy dynamically adjusts each client's privacy budget according to its proportion of personalized parameters, granting high-sensitivity clients stronger privacy protection while avoiding excessive noise perturbation of the parameters that dominate global convergence, thereby suppressing the accumulation of privacy noise in later training rounds. Experiments on MNIST, CIFAR-10, and Fashion-MNIST show that, under strict differential privacy constraints, DP-FedADC significantly improves classification accuracy and domain generalization, achieving up to a 2%–4% gain in test accuracy over baseline methods and converging to a lower loss range. These results validate the effectiveness and robustness of the proposed framework in differentially private federated learning scenarios.
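The abstract does not give CAPBA's allocation formula, but its core idea — shrink a client's privacy budget (and thus increase its noise) as its share of personalized, sensitive parameters grows — can be sketched with the standard Gaussian mechanism. The linear budget rule with coefficient `alpha`, and the function names `capba_budget` and `privatize_update`, are illustrative assumptions, not the paper's actual method; only the clip-then-add-Gaussian-noise step and the noise-scale formula are standard DP-SGD practice.

```python
import math
import random

def gaussian_sigma(epsilon: float, delta: float, clip_norm: float) -> float:
    """Standard Gaussian-mechanism noise scale for (epsilon, delta)-DP,
    given an update whose L2 sensitivity is bounded by clip_norm."""
    return clip_norm * math.sqrt(2 * math.log(1.25 / delta)) / epsilon

def capba_budget(base_epsilon: float, personalized_ratio: float,
                 alpha: float = 0.5) -> float:
    """Hypothetical CAPBA rule (assumption): the larger a client's share of
    personalized parameters, the smaller its budget, i.e. the stricter
    its privacy protection."""
    return base_epsilon * (1.0 - alpha * personalized_ratio)

def privatize_update(update, personalized_ratio, base_epsilon=1.0,
                     delta=1e-5, clip_norm=1.0):
    """Clip a client's shared-parameter update to bound sensitivity,
    then add Gaussian noise calibrated to that client's adaptive budget."""
    eps_i = capba_budget(base_epsilon, personalized_ratio)
    sigma = gaussian_sigma(eps_i, delta, clip_norm)
    norm = math.sqrt(sum(u * u for u in update)) or 1.0
    scale = min(1.0, clip_norm / norm)
    return [u * scale + random.gauss(0.0, sigma) for u in update]
```

A client with `personalized_ratio = 0.8` would thus receive a smaller effective epsilon than one with `personalized_ratio = 0.1`, so its shared update is perturbed more strongly, matching the abstract's claim that high-sensitivity clients obtain stricter protection.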