
Computer Engineering ›› 2023, Vol. 49 ›› Issue (9): 144-157. doi: 10.19678/j.issn.1000-3428.0065590

• Cyberspace Security •

Differential Privacy Deep Learning Model Based on Particle Swarm Optimization

Panfeng ZHANG1,2, Danhua WU1, Minggang DONG1,2,*

  1. College of Information Science and Engineering, Guilin University of Technology, Guilin 541006, Guangxi, China
  2. Guangxi Key Laboratory of Embedded Technology and Intelligent System, Guilin 541006, Guangxi, China
  • Received: 2022-08-25  Online: 2023-09-15  Published: 2023-09-14
  • Contact: Minggang DONG
  • About the authors:

    ZHANG Panfeng (b. 1978), male, lecturer, Ph.D.; his main research interests are information storage, information security, and artificial intelligence.

    WU Danhua, M.S.

  • Funding:
    National Natural Science Foundation of China (61862019); Guangxi Science and Technology Base and Talent Special Project (2018AD19136); Scientific Research Startup Fund of Guilin University of Technology (GLUTQD2017065); Project of Guangxi Key Laboratory of Embedded Technology and Intelligent System



Abstract:

Excessive gradient-perturbation noise degrades data utility in deep models protected by differential privacy. A deep learning model with Differential Privacy (DP) based on the Particle Swarm Optimization (PSO) algorithm is proposed to address this issue. Following the PSO strategy, the algorithm maps particle positions to network parameters and identifies the individual and global historical optimal positions. The gradient obtained at the global optimal particle position is then perturbed and fed back into model training. Without altering the parameter and gradient structures, the noise parameters are optimized by exploiting the network's propagation properties, and the perturbed position parameters are used to obtain the objective function, minimizing the empirical risk function and reducing the damage noise inflicts on model utility. In federated learning, each local client uses the particle position parameters to update the individual and global historical optimal positions in its local environment, perturbs the current optimal parameters it finds, and uploads them to the central server. When a client trains again, it requests the optimized, perturbed, and aggregated broadcast parameters from the server and participates in cooperative learning. Experimental results demonstrate that the proposed model preserves utility well while protecting privacy: compared with the DP-SGD algorithm, accuracy on the MNIST, Fashion-MNIST, and CIFAR-10 datasets increases by 5.03%, 9.23%, and 22.77%, respectively. The proposed algorithm also accelerates model convergence in federated learning and performs well on both IID and non-IID data.
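The centralized procedure described above can be illustrated with a short sketch. The following Python/NumPy code is a minimal, hypothetical illustration, not the authors' implementation: particle positions are mapped to the parameter vector of a toy model, individual (pbest) and global (gbest) historical optima are tracked, and the gradient taken at the global best position is clipped and Gaussian-perturbed before being fed back into training. For brevity the whole batch gradient is clipped rather than per-example gradients as in true DP-SGD, and all hyperparameters are made up.

```python
# Hedged sketch: PSO positions mapped to network parameters, with a
# DP-style (clip + Gaussian noise) perturbed gradient step at the gbest.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary logistic-regression "network"; a position vector = its weights.
X = rng.normal(size=(256, 10))
y = (X @ rng.normal(size=10) > 0).astype(float)

def loss(w):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def grad(w):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return X.T @ (p - y) / len(y)

# PSO state: each particle's position is one candidate parameter vector.
n_particles, dim = 8, 10
pos = rng.normal(size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([loss(w) for w in pos])
gbest = pbest[pbest_val.argmin()].copy()

clip_norm, noise_mult, lr = 1.0, 1.1, 0.5  # hypothetical DP hyperparameters

for step in range(100):
    # DP step at the global best: clip the gradient, add Gaussian noise,
    # then feed the perturbed update back into training.
    g = grad(gbest)
    g = g / max(1.0, np.linalg.norm(g) / clip_norm)          # L2 clipping
    g += rng.normal(scale=noise_mult * clip_norm, size=dim)  # Gaussian mechanism
    gbest = gbest - lr * g

    # Standard PSO velocity/position update toward pbest and gbest.
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel

    # Refresh individual and global historical best positions.
    vals = np.array([loss(w) for w in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    if pbest_val.min() < loss(gbest):
        gbest = pbest[pbest_val.argmin()].copy()

print("final loss at global best:", loss(gbest))
```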
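The federated mode can be sketched similarly. In the hypothetical round below (again an illustration under assumed interfaces, not the paper's code), each client derives its current best parameters locally, perturbs them with the Gaussian mechanism before upload, and the server aggregates the uploads and broadcasts the result for the next round of cooperative learning.

```python
# Hedged sketch of one federated round: perturb-then-upload local best
# parameters, server-side aggregation, broadcast back to clients.
import numpy as np

rng = np.random.default_rng(1)
dim, n_clients = 10, 5
clip_norm, noise_mult = 1.0, 1.1  # hypothetical privacy hyperparameters

def local_best(broadcast):
    """Stand-in for a client's local PSO search seeded by the broadcast
    parameters; a real client would run the PSO/DP loop sketched earlier."""
    return broadcast + 0.1 * rng.normal(size=dim)

def perturb(w):
    w = w / max(1.0, np.linalg.norm(w) / clip_norm)      # bound sensitivity
    return w + rng.normal(scale=noise_mult * clip_norm, size=dim)

broadcast = np.zeros(dim)
for rnd in range(3):
    uploads = [perturb(local_best(broadcast)) for _ in range(n_clients)]
    broadcast = np.mean(uploads, axis=0)  # FedAvg-style aggregation
    print(f"round {rnd}: broadcast norm = {np.linalg.norm(broadcast):.3f}")
```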

Key words: Differential Privacy (DP), noise parameters, Particle Swarm Optimization (PSO), network training, federated learning