
Computer Engineering ›› 2023, Vol. 49 ›› Issue (10): 186-193. doi: 10.19678/j.issn.1000-3428.0065438

• Graphics and Image Processing •

Class-Gradient Global Adversarial Example Generation for Anchor-Free Models

Yunxu XIE, Xi WU, Jing PENG

  1. School of Computer Science, Chengdu University of Information Technology, Chengdu 610225, China
  • Received: 2022-08-04  Online: 2023-10-15  Published: 2023-10-10
  • About the authors:

    XIE Yunxu (b. 1992), male, M.S. candidate; main research interests include adversarial attacks, computer vision, and image processing

    WU Xi, professor, Ph.D.

    PENG Jing, Ph.D.

  • Funding:
    National Natural Science Foundation of China (42075142); Sichuan Science and Technology Program (2022YFG0026, 2021YFG0018, 2020JDTD0020)

Abstract:

In computer vision tasks, deep neural network (DNN) models are vulnerable to adversarial examples, and adversarial examples that generalize well can affect a broader range of models. To investigate the vulnerability of DNN models and help remedy it, a class-gradient-based global adversarial example generation method is proposed. Class gradients are collected quickly per object class in an image, and the gradients of all targets belonging to the same class are fused in a single pass to capture intra-class similarity and inter-class difference. On this basis, a global perturbation is generated from a fixed proportion of the dataset's images together with the corresponding image-scale perturbations. This process removes the constraints imposed by the model's candidate boxes and by the number of images, so the resulting global perturbation remains effective across large amounts of data. Experimental results show that the method outperforms PGD, FPE, and other algorithms on two computer vision tasks over the Pascal VOC and MS-COCO-keypoints datasets: its attack success rate is 1 percentage point higher than that of DAG and 34 percentage points higher than that of FPE, while the perturbation is inferred faster. The existence of a global perturbation indicates that the high-dimensional decision boundaries of DNN models share a certain geometric correlation, and the proposed method can help DNNs resist global adversarial examples with higher generalization ability.

Key words: adversarial example, global perturbation, object detection, human pose estimation, anchor-free model
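
The following is a minimal PyTorch-style sketch, not the authors' released implementation, of how the class-gradient fusion and global-perturbation accumulation described in the abstract could be realized. The model, per-target loss function, data loader, target grouping, and step sizes (alpha, eps) are illustrative assumptions; a real detection or pose-estimation pipeline would add image rescaling and task-specific losses.

    # Illustrative sketch only: fuse the gradients of all targets of one class in a
    # single backward pass, then accumulate signed gradients over a fraction of the
    # dataset into one bounded global perturbation.
    import torch

    def class_gradient(model, image, class_targets, loss_fn):
        # One backward pass per class: summing the per-target losses fuses the
        # gradients of every target belonging to that class.
        image = image.clone().detach().requires_grad_(True)
        outputs = model(image)
        loss = sum(loss_fn(outputs, t) for t in class_targets)
        loss.backward()
        return image.grad.detach()

    def build_global_perturbation(model, loader, loss_fn, eps=8 / 255, alpha=1 / 255):
        # 'loader' yields (image, targets_by_class) for a chosen fraction of the
        # dataset; all images are assumed to share one spatial size here.
        delta = None
        for image, targets_by_class in loader:
            if delta is None:
                delta = torch.zeros_like(image)
            for cls, class_targets in targets_by_class.items():
                grad = class_gradient(model, image + delta, class_targets, loss_fn)
                delta = delta + alpha * grad.sign()   # FGSM-style ascent step
            delta = delta.clamp(-eps, eps)            # keep the perturbation bounded
        return delta

Because the same delta is reused and clipped across images, the sketch yields a single image-agnostic perturbation, which is the sense in which the perturbation is "global" in the abstract.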