
Computer Engineering ›› 2021, Vol. 47 ›› Issue (12): 163-170. doi: 10.19678/j.issn.1000-3428.0060571

• Cyberspace Security •

Adversarial Example Defense Model Based on U-Net

LAI Yanling1, SHI Junfeng1, CHEN Jixin1, BAI Hanli2, TANG Xiaolan1, DENG Biying1, ZHENG Desheng1

  1. School of Computer Science, Southwest Petroleum University, Chengdu 610500, China;
  2. China Aerodynamics Research and Development Center, Mianyang, Sichuan 621000, China
  • Received: 2021-01-12  Revised: 2021-04-25  Published: 2021-05-08
  • About the authors: LAI Yanling (1997-), female, M.S. candidate, whose main research interest is adversarial example defense; SHI Junfeng, B.S.; CHEN Jixin, M.S. candidate; BAI Hanli, senior engineer, M.S.; TANG Xiaolan and DENG Biying, M.S. candidates; ZHENG Desheng, associate research fellow, Ph.D.
  • Foundation: Major Science and Technology Special Project of Sichuan Province, "Research, Development and Application of Personalized and Customized Intelligent Education with Internet Plus Artificial Intelligence in the New Era" (18ZDZX).


Abstract: An adversarial attack adds a small perturbation to an image so that a deep neural network outputs an incorrect classification with high confidence. This paper proposes an adversarial example defense model named SE-ResU-Net. Built on the U-Net image semantic segmentation architecture, it introduces residual modules and Squeeze-and-Excitation (SE) modules, and performs feature extraction and image restoration through compression and reconstruction, which destroys the perturbation structure in adversarial examples. Experimental results show that SE-ResU-Net effectively defends against adversarial examples generated by the MI-FGSM, PGD, DeepFool, and C&W attacks, achieving defense success rates of up to 87.0% on CIFAR10 and 93.2% on Fashion-MNIST, and exhibits good generalization performance.
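The abstract names the Squeeze-and-Excitation (SE) module as one of the building blocks but, as only the abstract is reproduced here, the paper's implementation is not available. Below is a minimal NumPy sketch of the channel-reweighting idea behind an SE block; all names, shapes, and the reduction ratio are illustrative assumptions, not the authors' code:

```python
import numpy as np

def squeeze_excitation(feature_map, w1, w2):
    """Squeeze-and-Excitation channel reweighting (illustrative sketch).

    feature_map: array of shape (C, H, W)
    w1: (C // r, C) reduction weights; w2: (C, C // r) expansion weights,
    where r is the bottleneck reduction ratio.
    """
    # Squeeze: global average pooling collapses each channel to one scalar -> (C,)
    z = feature_map.mean(axis=(1, 2))
    # Excitation: bottleneck MLP, ReLU then sigmoid, yields one gate per channel
    s = np.maximum(w1 @ z, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))  # values in (0, 1), shape (C,)
    # Scale: multiply every spatial position of a channel by that channel's gate
    return feature_map * gate[:, None, None]

# Toy usage with 8 channels and reduction ratio r = 2
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((4, 8))
w2 = rng.standard_normal((8, 4))
y = squeeze_excitation(x, w1, w2)
```

Because every spatial element of a channel is scaled by the same gate, the block can suppress channels that respond strongly to adversarial perturbations while leaving the feature map's shape unchanged.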

Key words: deep neural network, image classification, adversarial attack, adversarial example, defense model, CIFAR10 dataset, Fashion-MNIST dataset
