
Computer Engineering ›› 2026, Vol. 52 ›› Issue (1): 266-281. doi: 10.19678/j.issn.1000-3428.0069675

• Cyberspace Security •

Frequency-Domain Quantization Adversarial Attacks Based on Remote Sensing Image Scene Classification

WANG Yi1, LI Zhi1,*, ZHANG Li1, SHI Xueli1, LIU Dengbo1, LU Yu2

  1. School of Computer Science and Technology, Guizhou University, Guiyang 550025, Guizhou, China
    2. Guizhou Power Grid Co., Ltd., Guiyang 550002, Guizhou, China
  • Received: 2024-03-29 Revised: 2024-05-31 Online: 2026-01-15 Published: 2024-07-11
  • Corresponding author: LI Zhi
  • About the authors:

    WANG Yi (CCF student member), female, M.S. candidate; her research interests include deep learning and adversarial attacks on images

    LI Zhi (CCF member, corresponding author), professor, Ph.D.

    ZHANG Li, lecturer, M.S.

    SHI Xueli, M.S. candidate

    LIU Dengbo, M.S. candidate

    LU Yu, senior engineer

  • Funding: National Natural Science Foundation of China (62062023)


Abstract:

Deep neural networks have achieved significant success in remote sensing image scene classification. However, because adversarial examples transfer readily across models, the vulnerability of remote sensing scene classification networks cannot be ignored. To enhance the robustness of these networks, ensure their reliability and security across diverse environments and conditions, and improve their practical value, this study proposes a Frequency-Domain Quantization (FDQ) adversarial attack method. First, the input image is transformed with the Discrete Cosine Transform (DCT), and a quantization filter is applied in the frequency domain to capture the salient frequency regions of the key features that allow the image to be classified correctly. Then, a class-based attention loss is proposed that drives the quantization filter to progressively discard these key features, so that the model's attention gradually shifts away from the original class and toward unrelated features and regions. The proposed method exploits the model's attention distribution to mount a feature-level black-box attack: by locating defense vulnerabilities common to different networks, it generates universal adversarial examples for remote sensing images. Experimental results demonstrate that FDQ successfully attacks most state-of-the-art deep neural networks on remote sensing scene classification tasks. Compared with current state-of-the-art attack methods for this task, FDQ improves the attack success rate on the RegNetX-400MF architecture by 35.43% and 23.63% on the UCM and AID benchmark datasets, respectively. The experiments show that FDQ achieves strong attack performance and transferability and is difficult for defense systems to resist.

Key words: adversarial attacks, adversarial examples, deep neural networks, remote sensing images, scene classification, black-box attacks
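As a rough illustration of the frequency-domain manipulation the abstract describes, the Python sketch below transforms an image with the 2D DCT, suppresses its dominant frequency components with a simple magnitude-based mask, and inverts the transform. This is a minimal sketch under assumed conventions, not the authors' FDQ implementation: the paper's quantization filter is learned under a class-based attention loss, whereas here fdq_perturb, keep_ratio, and the fixed thresholding rule are hypothetical stand-ins.

# Sketch: suppress the dominant DCT coefficients of an image, a crude
# stand-in for the quantization filter described in the abstract.
# NOT the authors' FDQ method; all names and thresholds are illustrative.
import numpy as np
from scipy.fft import dctn, idctn

def fdq_perturb(image: np.ndarray, keep_ratio: float = 0.95) -> np.ndarray:
    """Zero the highest-magnitude DCT coefficients of each channel.

    image: float array in [0, 1], shape (H, W) or (H, W, C).
    keep_ratio: fraction of coefficients kept; the top (1 - keep_ratio)
    by magnitude are zeroed, discarding the most salient frequency content.
    """
    x = image if image.ndim == 3 else image[..., None]
    out = np.empty_like(x)
    for c in range(x.shape[-1]):
        coeffs = dctn(x[..., c], norm="ortho")            # 2D DCT (type II)
        thresh = np.quantile(np.abs(coeffs), keep_ratio)  # magnitude cutoff
        mask = np.abs(coeffs) < thresh                    # drop dominant coefficients
        out[..., c] = idctn(coeffs * mask, norm="ortho")  # back to pixel domain
    out = np.clip(out, 0.0, 1.0)
    return out if image.ndim == 3 else out[..., 0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((224, 224, 3))          # stand-in for a remote sensing image
    adv = fdq_perturb(img, keep_ratio=0.95)  # zero the top 5% of coefficients
    print("L2 perturbation:", np.linalg.norm(adv - img))

In the paper itself, the mask would instead be optimized so that a surrogate classifier's attention drifts away from class-relevant regions, and the resulting perturbation is reported to transfer to unseen networks in a black-box setting.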