[1] ZHONG Y Y, DENG W H. Towards transferable adversarial attack against deep face recognition[J]. IEEE Transactions on Information Forensics and Security, 2021, 16: 1452-1466.
[2] WANG L, CHO W, YOON K J. Deceiving image-to-image translation networks for autonomous driving with adversarial perturbations[J]. IEEE Robotics and Automation Letters, 2020, 5(2): 1421-1428.
[3] 柴梦婷, 朱远平. 生成式对抗网络研究与应用进展[J]. 计算机工程, 2019, 45(9): 222-234.
CHAI M T, ZHU Y P. Research and application progress of generative adversarial networks[J]. Computer Engineering, 2019, 45(9): 222-234. (in Chinese)
[4] 陈晓楠, 胡建敏, 张本俊, 等. 基于模型间迁移性的黑盒对抗攻击起点提升方法[J]. 计算机工程, 2021, 47(8): 162-169.
CHEN X N, HU J M, ZHANG B J, et al. Black box adversarial attack starting point promotion method based on mobility between models[J]. Computer Engineering, 2021, 47(8): 162-169. (in Chinese)
[5] 黄立峰, 庄文梓, 廖泳贤, 等. 一种基于进化策略和注意力机制的黑盒对抗攻击算法[J]. 软件学报, 2021, 32(11): 3512-3529.
HUANG L F, ZHUANG W Z, LIAO Y X, et al. Black-box adversarial attack method based on evolution strategy and attention mechanism[J]. Journal of Software, 2021, 32(11): 3512-3529. (in Chinese)
[6] MAO X F, CHEN Y F, WANG S H, et al. Composite adversarial attacks[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2021, 35(10): 8884-8892.
[7] ZHENG H Z, ZHANG Z Q, GU J C, et al. Efficient adversarial training with transferable adversarial examples[C]//Proceedings of Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2020: 1178-1187.
[8] ZHOU W, HOU X, CHEN Y J, et al. Transferable adversarial perturbations[C]//Proceedings of European Conference on Computer Vision. Berlin, Germany: Springer, 2018: 452-467.
[9] MOOSAVI-DEZFOOLI S M, FAWZI A, FROSSARD P. DeepFool: a simple and accurate method to fool deep neural networks[C]//Proceedings of Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2016: 2574-2582.
[10] CHEN P Y, ZHANG H, SHARMA Y, et al. ZOO: zeroth order optimization based black-box attacks to deep neural networks without training substitute models[C]//Proceedings of the 10th Workshop on Artificial Intelligence and Security. New York, USA: ACM Press, 2017: 15-26.
[11] BRENDEL W, RAUBER J, BETHGE M. Decision-based adversarial attacks: reliable attacks against black-box machine learning models[EB/OL]. [2022-02-25]. https://arxiv.org/pdf/1712.04248.pdf.
[12] GOODFELLOW I J, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[EB/OL]. [2022-02-25]. https://arxiv.org/pdf/1412.6572.pdf.
[13] KURAKIN A, GOODFELLOW I J, BENGIO S. Adversarial examples in the physical world[EB/OL]. [2022-02-25]. https://arxiv.org/pdf/1607.02533.pdf.
[14] DONG Y P, LIAO F Z, PANG T Y, et al. Boosting adversarial attacks with momentum[C]//Proceedings of Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2018: 9185-9193.
[15] XIE C H, ZHANG Z S, ZHOU Y Y, et al. Improving transferability of adversarial examples with input diversity[C]//Proceedings of Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2019: 2725-2734.
[16] DONG Y P, PANG T Y, SU H, et al. Evading defenses to transferable adversarial examples by translation-invariant attacks[C]//Proceedings of Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2019: 4307-4316.
[17] LIN J D, SONG C B, HE K, et al. Nesterov accelerated gradient and scale invariance for adversarial attacks[EB/OL]. [2022-02-25]. https://arxiv.org/abs/1908.06281.
[18] MADRY A, MAKELOV A, SCHMIDT L, et al. Towards deep learning models resistant to adversarial attacks[EB/OL]. [2022-02-25]. https://arxiv.org/pdf/1706.06083.pdf.
[19] TRAMÈR F, KURAKIN A, PAPERNOT N, et al. Ensemble adversarial training: attacks and defenses[EB/OL]. [2022-02-25]. https://arxiv.org/pdf/1705.07204.pdf.
[20] COHEN J, ROSENFELD E, KOLTER Z J. Certified adversarial robustness via randomized smoothing[EB/OL]. [2022-02-25]. https://arxiv.org/pdf/1902.02918.pdf.
[21] XIE C H, ZHANG Z S, YUILLE A L, et al. Mitigating adversarial effects through randomization[EB/OL]. [2022-02-25]. https://arxiv.org/pdf/1711.01991.pdf.
[22] GUO C, RANA M, CISSÉ M, et al. Countering adversarial images using input transformations[EB/OL]. [2022-02-25]. https://arxiv.org/pdf/1711.00117.pdf.
[23] LIAO F Z, LIANG M, DONG Y P, et al. Defense against adversarial attacks using high-level representation guided denoiser[C]//Proceedings of Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2018: 1778-1787.
[24] GU S C, YI P, ZHU T, et al. Detecting adversarial examples in deep neural networks using normalizing filters[C]//Proceedings of the 11th International Conference on Agents and Artificial Intelligence. Prague, Czech Republic: Science and Technology Publications, 2019: 164-173.
[25] JIA X J, WEI X X, CAO X C, et al. ComDefend: an efficient image compression model to defend adversarial examples[C]//Proceedings of Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2019: 6077-6085.
[26] 赖妍菱, 石峻峰, 陈继鑫, 等. 基于U-Net的对抗样本防御模型[J]. 计算机工程, 2021, 47(12): 163-170.
LAI Y L, SHI J F, CHEN J X, et al. Adversarial example defense model based on U-Net[J]. Computer Engineering, 2021, 47(12): 163-170. (in Chinese)
[27] HU J, SHEN L, SUN G. Squeeze-and-excitation networks[C]//Proceedings of Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2018: 7132-7141.
[28] NASEER M, KHAN S, HAYAT M, et al. A self-supervised approach for adversarial robustness[C]//Proceedings of Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2020: 259-268.
[29] LIU Y P, CHEN X Y, LIU C, et al. Delving into transferable adversarial examples and black-box attacks[EB/OL]. [2022-02-25]. https://arxiv.org/pdf/1611.02770.pdf.
[30] RUSSAKOVSKY O, DENG J, SU H, et al. ImageNet large scale visual recognition challenge[J]. International Journal of Computer Vision, 2015, 115(3): 211-252.
[31] SZEGEDY C, VANHOUCKE V, IOFFE S, et al. Rethinking the inception architecture for computer vision[C]//Proceedings of Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2016: 2818-2826.
[32] SZEGEDY C, IOFFE S, VANHOUCKE V, et al. Inception-v4, Inception-ResNet and the impact of residual connections on learning[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2017, 31(1): 4278-4284.
[33] HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]//Proceedings of Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2016: 770-778.