
Computer Engineering ›› 2025, Vol. 51 ›› Issue (3): 45-53. doi: 10.19678/j.issn.1000-3428.0068941

• Hotspots and Reviews •

Motion-Blur-Camouflaged Adversarial Example Generation Method for Traffic Scenes

ZHANG Zhaoxin1, HUANG Shize2,*(), ZHANG Bingjie1, SHEN Tuo1,3

  1. Shanghai Key Laboratory of Rail Infrastructure Durability and System Safety, Shanghai 201804, China
    2. Key Laboratory of Road and Traffic Engineering of the Ministry of Education, Tongji University, Shanghai 201804, China
    3. School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
  • Received: 2023-12-01 Online: 2025-03-15 Published: 2024-05-10
  • Corresponding author: HUANG Shize
  • Supported by:
    National Key Research and Development Program of China (2022YFB4300501); Natural Science Foundation of Chongqing, China (CSTB2022NSCQ-MSX1454)

Motion-Blur-Camouflaged Adversarial Example Generation Method for Traffic Scenes

ZHANG Zhaoxin1, HUANG Shize2,*(), ZHANG Bingjie1, SHEN Tuo1,3   

  1. Shanghai Key Laboratory of Rail Infrastructure Durability and System Safety, Shanghai 201804, China
    2. Key Laboratory of Road and Traffic Engineering of the Ministry of Education, Tongji University, Shanghai 201804, China
    3. School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
  • Received: 2023-12-01 Online: 2025-03-15 Published: 2024-05-10
  • Contact: HUANG Shize

Abstract:

In autonomous driving perception systems, the Convolutional Neural Network (CNN) is a key technology underpinning vehicle perception and decision-making. However, the threat of adversarial example attacks seriously compromises the safety and stability of autonomous driving systems. Existing adversarial example generation methods typically add adversarial perturbations directly to the image, which degrades the visual quality of the adversarial example, leaves it poorly camouflaged, and makes it easy for human observers to recognize. To address this challenge, this paper introduces prior knowledge of the image blur caused by vehicle motion in traffic scenes and proposes a motion-blur-camouflaged adversarial example generation method. By simulating the blurring effects produced by moving vehicles and pedestrians, the method generates adversarial examples with motion-blur characteristics. To preserve the motion blur of the image while still mounting an effective adversarial attack, a target-invisibility adversarial loss function is designed. Experimental results on the ICDAR public dataset show that the number of detection boxes is 0 and that the image blur index obtained via the Brenner gradient function is 69.28, demonstrating that the method can generate motion-blur-camouflaged adversarial examples.
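The exact blur model used by the paper is not specified in this abstract. As an illustration only, linear motion blur of the kind produced by moving vehicles is commonly simulated by convolving the image with a normalized line-shaped kernel; the kernel length and angle below are hypothetical parameters, not values from the paper:

```python
import numpy as np

def motion_blur_kernel(length=5, angle_deg=0.0):
    """Build a normalized linear motion-blur kernel (illustrative)."""
    k = np.zeros((length, length), dtype=np.float64)
    c = length // 2
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-c, c, length):
        x = int(round(c + t * np.cos(theta)))  # column offset along the motion line
        y = int(round(c + t * np.sin(theta)))  # row offset along the motion line
        k[y, x] = 1.0
    return k / k.sum()  # normalize so overall brightness is preserved

def apply_motion_blur(img, kernel):
    """Convolve a 2-D grayscale image with the kernel (edge padding)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out
```

A longer kernel corresponds to faster apparent motion; the attack described above would optimize perturbations under such a blur constraint rather than adding them directly to the sharp image.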

Key words: autonomous driving perception, adversarial example, motion blur, object detection, Convolutional Neural Network (CNN)
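The target-invisibility loss mentioned in the abstract is not given in closed form here. A common way to formulate such a disappearance objective is to penalize the detector's box confidence scores so that, after optimization, no prediction survives the confidence threshold; the function name and mean-squared form below are assumptions for illustration:

```python
import numpy as np

def invisibility_loss(confidences):
    """Hypothetical disappearance loss: mean squared detector confidence.
    Minimizing it drives every predicted box's score toward zero,
    so no detection box survives thresholding."""
    c = np.asarray(confidences, dtype=np.float64)
    return float(np.mean(c ** 2))
```

Driving this loss to zero corresponds to the reported result of zero detection boxes on the adversarial images.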

Abstract:

In autonomous driving perception systems, Convolutional Neural Networks (CNNs) play a pivotal role as a fundamental technology in vehicle perception and decision-making. However, adversarial attacks pose a substantial threat to the safety and robustness of such systems. Contemporary adversarial example generation approaches often inject perturbations directly into images, degrading visual fidelity and providing inadequate concealment, which renders the examples readily discernible to human observers. To address this challenge, this study leverages prior knowledge of the motion blur induced by vehicular motion in traffic scenes and proposes a camouflaged adversarial example generation method. Adversarial examples with motion-blur characteristics are synthesized by simulating the blurring effects inherent to vehicular and pedestrian motion. To preserve the motion blur within images while effectively executing adversarial attacks, an object-invisibility adversarial loss function is formulated. Experimental results on the ICDAR public dataset demonstrate the efficacy of the proposed method: the detection box count is 0, and the image blur index obtained through the Brenner gradient function is 69.28, validating that the method produces motion-blur-camouflaged adversarial examples.
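The Brenner gradient used as the blur index above is a standard focus measure: the sum of squared intensity differences between pixels two positions apart, with lower values indicating a blurrier image. The normalization that yields the paper's reported index of 69.28 is not specified in this abstract; the sketch below shows the unnormalized form:

```python
import numpy as np

def brenner_gradient(img):
    """Brenner focus measure: sum of squared intensity differences
    between pixels two columns apart. Lower values = blurrier image."""
    img = np.asarray(img, dtype=np.float64)
    diff = img[:, 2:] - img[:, :-2]  # horizontal second-neighbor differences
    return float(np.sum(diff ** 2))
```

Comparing the score of an image before and after motion blur is applied quantifies how much blur the generated adversarial example retains.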

Key words: autonomous driving perception, adversarial example, motion blur, object detection, Convolutional Neural Network (CNN)