[1] NAFEA A A, ALAMERI S A, MAJEED R R, et al. A short review on supervised machine learning and deep learning techniques in computer vision[J]. Babylonian Journal of Machine Learning, 2024, 2024: 48-55.
[2] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017, 60(6): 84-90.
[3] GRIGORESCU S, TRASNEA B, COCIAS T, et al. A survey of deep learning techniques for autonomous driving[J]. Journal of Field Robotics, 2020, 37(3): 362-386.
[4] GILANI S Z, MIAN A. Learning from millions of 3D scans for large-scale 3D face recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE Press, 2018: 1896-1905.
[5] SÜNDERHAUF N, BROCK O, SCHEIRER W, et al. The limits and potentials of deep learning for robotics[J]. The International Journal of Robotics Research, 2018, 37(4/5): 405-420.
[6] NAJAFABADI M M, VILLANUSTRE F, KHOSHGOFTAAR T M, et al. Deep learning applications and challenges in big data analytics[J]. Journal of Big Data, 2015, 2(1): 1.
[7] SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[EB/OL]. [2024-07-29]. https://arxiv.org/pdf/1312.6199.
[8] 刘会, 赵波, 郭嘉宝, 等. 针对深度学习的对抗攻击综述[J]. 密码学报, 2021, 8(2): 202-214. LIU H, ZHAO B, GUO J B, et al. Survey on adversarial attacks towards deep learning[J]. Journal of Cryptologic Research, 2021, 8(2): 202-214. (in Chinese)
[9] GOODFELLOW I J, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[EB/OL]. [2024-07-29]. https://arxiv.org/pdf/1412.6572.
[10] TABACOF P, VALLE E. Exploring the space of adversarial images[C]//Proceedings of the International Joint Conference on Neural Networks (IJCNN). Vancouver, Canada: IEEE Press, 2016: 426-433.
[11] TANAY T, GRIFFIN L. A boundary tilting perspective on the phenomenon of adversarial examples[EB/OL]. [2024-07-29]. https://arxiv.org/pdf/1608.07690.
[12] DUBE S. High dimensional spaces, deep learning and adversarial examples[EB/OL]. [2024-07-29]. https://arxiv.org/pdf/1801.00634.
[13] AMSALEG L, BAILEY J, BARBE A, et al. High intrinsic dimensionality facilitates adversarial attack: theoretical evidence[J]. IEEE Transactions on Information Forensics and Security, 2021, 16: 854-865.
[14] ILYAS A, SANTURKAR S, TSIPRAS D, et al. Adversarial examples are not bugs, they are features[C]//Proceedings of the 33rd International Conference on Neural Information Processing Systems. New York, USA: ACM, 2019: 125-136.
[15] 蔡秀霞, 杜慧敏. 对抗攻击及对抗样本生成方法综述[J]. 西安邮电大学学报, 2021, 26(1): 67-75. CAI X X, DU H M. Survey on adversarial examples generation and adversarial attack method[J]. Journal of Xi'an University of Posts and Telecommunications, 2021, 26(1): 67-75. (in Chinese)
[16] REN K, ZHENG T H, QIN Z, et al. Adversarial attacks and defenses in deep learning[J]. Engineering, 2020, 6(3): 307-339.
[17] CARLINI N, WAGNER D. Towards evaluating the robustness of neural networks[C]//Proceedings of the IEEE Symposium on Security and Privacy (SP). San Jose, USA: IEEE Press, 2017: 39-57.
[18] ROZSA A, RUDD E M, BOULT T E. Adversarial diversity and hard positive generation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Las Vegas, USA: IEEE Press, 2016: 410-417.
[19] KURAKIN A, GOODFELLOW I, BENGIO S. Adversarial machine learning at scale[EB/OL]. [2024-07-29]. https://arxiv.org/pdf/1611.01236.
[20] DONG Y P, LIAO F Z, PANG T Y, et al. Boosting adversarial attacks with momentum[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE Press, 2018: 9185-9193.
[21] WANG G Q, YAN H Q, WEI X X. Enhancing transferability of adversarial examples with spatial momentum[C]//Proceedings of the Chinese Conference on Pattern Recognition and Computer Vision (PRCV). Shenzhen, China: Springer, 2022: 593-604.
[22] MADRY A, MAKELOV A, SCHMIDT L, et al. Towards deep learning models resistant to adversarial attacks[EB/OL]. [2024-07-29]. https://arxiv.org/pdf/1706.06083.
[23] PAPERNOT N, MCDANIEL P, JHA S, et al. The limitations of deep learning in adversarial settings[C]//Proceedings of the IEEE European Symposium on Security and Privacy (EuroS&P). Washington D. C., USA: IEEE Press, 2016: 372-387.
[24] MOOSAVI-DEZFOOLI S M, FAWZI A, FAWZI O, et al. Universal adversarial perturbations[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, USA: IEEE Press, 2017: 86-94.
[25] SU J W, VARGAS D V, SAKURAI K. One pixel attack for fooling deep neural networks[J]. IEEE Transactions on Evolutionary Computation, 2019, 23(5): 828-841.
[26] ANDRIUSHCHENKO M, CROCE F, FLAMMARION N, et al. Square attack: a query-efficient black-box adversarial attack via random search[C]//Proceedings of the European Conference on Computer Vision. Berlin, Germany: Springer, 2020: 484-501.
[27] LIU Y P, CHEN X Y, LIU C, et al. Delving into transferable adversarial examples and black-box attacks[EB/OL]. [2024-07-29]. https://arxiv.org/abs/1611.02770.
[28] YUAN Z, ZHANG J, JIA Y P, et al. Meta gradient adversarial attack[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Montreal, Canada: IEEE Press, 2021: 7728-7737.
[29] BALUJA S, FISCHER I. Learning to attack: adversarial transformation networks[C]//Proceedings of the AAAI Conference on Artificial Intelligence. [S. l.]: AAAI Press, 2018: 1-15.
[30] XIAO C, LI B, ZHU J Y, et al. Generating adversarial examples with adversarial networks[EB/OL]. [2024-07-29]. https://arxiv.org/pdf/1801.02610.
[31] DOLATABADI H M, ERFANI S, LECKIE C. AdvFlow: inconspicuous black-box adversarial attacks using normalizing flows[J]. Advances in Neural Information Processing Systems, 2020, 33: 15871-15884.
[32] CHEN X Q, GAO X T, ZHAO J J, et al. AdvDiffuser: natural adversarial example synthesis with diffusion models[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Paris, France: IEEE Press, 2023: 4539-4549.
[33] PAPERNOT N, MCDANIEL P, GOODFELLOW I, et al. Practical black-box attacks against machine learning[C]//Proceedings of the 2017 Asia Conference on Computer and Communications Security. New York, USA: ACM, 2017: 506-519.
[34] MA C, CHEN L, YONG J H. Simulating unknown target models for query-efficient black-box attacks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE Press, 2021: 11830-11839.
[35] XIAO C, ZHU J Y, LI B, et al. Spatially transformed adversarial examples[EB/OL]. [2024-07-29]. https://arxiv.org/abs/1801.02612.
[36] ATHALYE A, ENGSTROM L, ILYAS A, et al. Synthesizing robust adversarial examples[C]//Proceedings of the International Conference on Machine Learning. [S. l.]: PMLR, 2018: 284-293.
[37] BROWN T B, MANÉ D, ROY A, et al. Adversarial patch[EB/OL]. [2024-07-29]. https://arxiv.org/pdf/1712.09665.
[38] LI J, SCHMIDT F, KOLTER Z. Adversarial camera stickers: a physical camera-based attack on deep learning systems[C]//Proceedings of the International Conference on Machine Learning. [S. l.]: PMLR, 2019: 3896-3904.
[39] GNANASAMBANDAM A, SHERMAN A M, CHAN S H. Optical adversarial attack[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). Montreal, Canada: IEEE Press, 2021: 92-101.
[40] KNIGHT W. The dark secret at the heart of AI[J]. Technology Review, 2017, 120(3): 54-61.
[41] 贵向泉, 刘世清, 李立, 等. 基于改进YOLOv8的景区行人检测算法[J]. 计算机工程, 2024, 50(7): 342-351. GUI X Q, LIU S Q, LI L, et al. Pedestrian detection algorithm for scenic spots based on improved YOLOv8[J]. Computer Engineering, 2024, 50(7): 342-351. (in Chinese)
[42] 王林, 赵莉, 王无为. 高动态场景下无人机空对空目标检测[J]. 计算机工程, 2024, 50(12): 265-275. WANG L, ZHAO L, WANG W W. Air-to-air target detection of unmanned aerial vehicles under high dynamic scenarios[J]. Computer Engineering, 2024, 50(12): 265-275. (in Chinese)
[43] GIRSHICK R, DONAHUE J, DARRELL T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Columbus, USA: IEEE Press, 2014: 580-587.
[44] GIRSHICK R. Fast R-CNN[C]//Proceedings of the IEEE International Conference on Computer Vision (ICCV). Santiago, Chile: IEEE Press, 2015: 1440-1448.
[45] REN S Q, HE K M, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149.
[46] REDMON J, FARHADI A. YOLO9000: better, faster, stronger[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, USA: IEEE Press, 2017: 6517-6525.
[47] REDMON J, FARHADI A. YOLOv3: an incremental improvement[EB/OL]. [2024-07-29]. https://arxiv.org/abs/1804.02767.
[48] BOCHKOVSKIY A, WANG C Y, LIAO H Y M. YOLOv4: optimal speed and accuracy of object detection[EB/OL]. [2024-07-29]. https://arxiv.org/abs/2004.10934.
[49] 崔丽群, 曹华维. 基于改进YOLOv5的遥感图像目标检测[J]. 计算机工程, 2024, 50(4): 228-236. CUI L Q, CAO H W. Target detection of remote-sensing images based on improved YOLOv5[J]. Computer Engineering, 2024, 50(4): 228-236. (in Chinese)
[50] LU J, SIBAI H, FABRY E. Adversarial examples that fool detectors[EB/OL]. [2024-07-29]. https://arxiv.org/pdf/1712.02494.
[51] XIAO Y T, PUN C M, LIU B. Fooling deep neural detection networks with adaptive object-oriented adversarial perturbation[J]. Pattern Recognition, 2021, 115: 107903.
[52] LI Y W, XU G L, LI W L. FA: a fast method to attack real-time object detection systems[C]//Proceedings of the IEEE/CIC International Conference on Communications in China (ICCC). Chongqing, China: IEEE Press, 2020: 1268-1273.
[53] WANG D R, LI C R, WEN S, et al. Daedalus: breaking nonmaximum suppression in object detection via adversarial examples[J]. IEEE Transactions on Cybernetics, 2022, 52(8): 7427-7440.
[54] SHAPIRA A, ZOLFI A, DEMETRIO L, et al. Phantom sponges: exploiting non-maximum suppression to attack deep object detectors[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). Waikoloa, USA: IEEE Press, 2023: 4560-4569.
[55] LI Y, TIAN D, CHANG M C, et al. Robust adversarial perturbation on deep proposal-based models[EB/OL]. [2024-07-29]. https://arxiv.org/abs/1809.05962.
[56] LEE M, KOLTER Z. On physical adversarial patches for object detection[EB/OL]. [2024-07-29]. https://arxiv.org/abs/1906.11897.
[57] XU K D, ZHANG G Y, LIU S J, et al. Adversarial T-shirt! Evading person detectors in a physical world[C]//Proceedings of the European Conference on Computer Vision. Berlin, Germany: Springer, 2020: 665-681.
[58] ZHANG Y, GONG Z, ZHANG Y C, et al. Transferable physical attack against object detection with separable attention[EB/OL]. [2024-07-29]. https://arxiv.org/abs/2205.09592.
[59] YANG K C, TSAI T, YU H G, et al. Beyond digital domain: fooling deep learning based recognition system in physical world[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34(1): 1088-1095.
[60] HU Y C T, CHEN J C, KUNG B H, et al. Naturalistic physical adversarial patch for object detectors[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Montreal, Canada: IEEE Press, 2021: 7828-7837.
[61] GUESMI A, BILASCO I M, SHAFIQUE M, et al. AdvART: adversarial art for camouflaged object detection attacks[EB/OL]. [2024-07-29]. https://arxiv.org/pdf/2303.01734.
[62] LIU A S, LIU X L, FAN J X, et al. Perceptual-sensitive GAN for generating adversarial patches[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2019, 33(1): 1028-1035.
[63] DUAN R J, MA X J, WANG Y S, et al. Adversarial camouflage: hiding physical-world attacks with natural styles[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE Press, 2020: 997-1005.
[64] WANG J K, LIU A S, YIN Z X, et al. Dual attention suppression attack: generate adversarial camouflage in physical world[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE Press, 2021: 8561-8570.
[65] ZHONG Y Q, LIU X M, ZHAI D M, et al. Shadows can be dangerous: stealthy and effective physical-world adversarial attack by natural phenomenon[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE Press, 2022: 15324-15333.
[66] DUAN R J, MAO X F, QIN A K, et al. Adversarial laser beam: effective physical-world attack to DNNs in a blink[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE Press, 2021: 16057-16066.
[67] HU Z H, HUANG S Y, ZHU X P, et al. Adversarial texture for fooling person detectors in the physical world[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE Press, 2022: 13297-13306.
[68] HU Z H, CHU W D, ZHU X P, et al. Physically realizable natural-looking clothing textures evade person detectors via 3D modeling[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Vancouver, Canada: IEEE Press, 2023: 16975-16984.
[69] ZHU X P, HU Z H, HUANG S Y, et al. Infrared invisible clothing: hiding from infrared detectors at multiple angles in real world[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE Press, 2022: 13307-13316.
[70] SURYANTO N, KIM Y, KANG H, et al. DTA: physical camouflage attacks using differentiable transformation network[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE Press, 2022: 15284-15293.
[71] SURYANTO N, KIM Y, LARASATI H T, et al. ACTIVE: towards highly transferable 3D physical camouflage for universal and robust vehicle evasion[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Paris, France: IEEE Press, 2023: 4282-4291.
[72] TOHEED A, YOUSAF M H, RABNAWAZ, et al. Physical adversarial attack scheme on object detectors using 3D adversarial object[C]//Proceedings of the 2nd International Conference on Digital Futures and Transformative Technologies (ICoDT2). Rawalpindi, Pakistan: IEEE Press, 2022: 1-4.
[73] ZOLFI A, KRAVCHIK M, ELOVICI Y, et al. The translucent patch: a physical and universal attack on object detectors[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE Press, 2021: 15227-15236.
[74] 王志波, 王雪, 马菁菁, 等. 面向计算机视觉系统的对抗样本攻击综述[J]. 计算机学报, 2023, 46(2): 436-468. WANG Z B, WANG X, MA J J, et al. A review of adversarial sample attacks for computer vision systems[J]. Chinese Journal of Computers, 2023, 46(2): 436-468. (in Chinese)
[75] ALPARSLAN Y, ALPARSLAN K, KEIM-SHENK J, et al. Adversarial attacks on convolutional neural networks in facial recognition domain[EB/OL]. [2024-07-29]. https://arxiv.org/abs/2001.11137.
[76] DABOUEI A, SOLEYMANI S, DAWSON J, et al. Fast geometrically-perturbed adversarial faces[C]//Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV). Waikoloa Village, USA: IEEE Press, 2019: 1979-1988.
[77] LIN C Y, CHEN F J, NG H F, et al. Invisible adversarial attacks on deep learning-based face recognition models[J]. IEEE Access, 2023, 11: 51567-51577.
[78] HUSSAIN S, HUSTER T, MESTERHARM C, et al. ReFace: real-time adversarial attacks on face recognition systems[EB/OL]. [2024-07-29]. https://arxiv.org/abs/2206.04783.
[79] DEB D, ZHANG J B, JAIN A K. AdvFaces: adversarial face synthesis[C]//Proceedings of the IEEE International Joint Conference on Biometrics (IJCB). Houston, USA: IEEE Press, 2020: 1-10.
[80] ZHONG Y Y, DENG W H. Towards transferable adversarial attack against deep face recognition[J]. IEEE Transactions on Information Forensics and Security, 2021, 16: 1452-1466.
[81] SHEN M, YU H, ZHU L H, et al. Effective and robust physical-world attacks on deep learning face recognition systems[J]. IEEE Transactions on Information Forensics and Security, 2021, 16: 4063-4077.
[82] KOMKOV S, PETIUSHKO A. AdvHat: real-world adversarial attack on ArcFace face ID system[C]//Proceedings of the 25th International Conference on Pattern Recognition (ICPR). Milan, Italy: IEEE Press, 2021: 819-826.
[83] ZHENG X, FAN Y B, WU B Y, et al. Robust physical-world attacks on face recognition[J]. Pattern Recognition, 2023, 133: 109009.
[84] SHARIF M, BHAGAVATULA S, BAUER L, et al. Accessorize to a crime: real and stealthy attacks on state-of-the-art face recognition[C]//Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. New York, USA: ACM, 2016: 1528-1540.
[85] GONG H H, DONG M J, MA S Q, et al. Stealthy physical masked face recognition attack via adversarial style optimization[J]. IEEE Transactions on Multimedia, 2024, 26: 5014-5025.
[86] WEI X, GUO Y, YU J. Adversarial sticker: a stealthy attack method in the physical world[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(3): 2711-2725.
[87] YANG X, LIU C, XU L L, et al. Towards effective adversarial textured 3D meshes on physical face recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Vancouver, Canada: IEEE Press, 2023: 4119-4128.
[88] YIN B J, WANG W X, YAO T P, et al. Adv-Makeup: a new imperceptible and transferable attack on face recognition[EB/OL]. [2024-07-29]. https://arxiv.org/abs/2105.03162.
[89] NGUYEN D L, ARORA S S, WU Y H, et al. Adversarial light projection attacks on face recognition systems: a feasibility study[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Seattle, USA: IEEE Press, 2020: 3548-3556.
[90] ZHOU Z, TANG D, WANG X F, et al. Invisible mask: practical attacks on face recognition with infrared[EB/OL]. [2024-07-29]. https://arxiv.org/abs/1803.04683.
[91] YAN B, WANG D, LU H C, et al. Cooling-shrinking attack: blinding the tracker with imperceptible noises[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE Press, 2020: 987-996.
[92] CHEN X S, YAN X Y, ZHENG F, et al. One-shot adversarial attacks on visual tracking with dual attention[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE Press, 2020: 10173-10182.
[93] CHEN X S, FU C M, ZHENG F, et al. A unified multi-scenario attacking network for visual object tracking[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2021, 35(2): 1097-1104.
[94] LIN D L, CHEN Q, ZHOU C Y, et al. Tracklet-switch and imperceivable adversarial attack against pedestrian multi-object tracking trackers[J]. Applied Soft Computing, 2024, 162: 111860.
[95] JIA Y H, LU Y T, SHEN J J, et al. Fooling detection alone is not enough: adversarial attack against multiple object tracking[EB/OL]. [2024-07-29]. https://arxiv.org/abs/1905.11026v2.
[96] WIYATNO R, XU A Q. Physical adversarial textures that fool visual object tracking[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, Korea: IEEE Press, 2019: 4821-4830.
[97] DING L, WANG Y W, YUAN K W, et al. Towards universal physical attacks on single object tracking[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2021, 35(2): 1236-1245.
[98] WONG A, CICEK S, SOATTO S. Targeted adversarial perturbations for monocular depth prediction[J]. Advances in Neural Information Processing Systems, 2020, 33: 8486-8497.
[99] GUESMI A, HANIF M A, ALOUANI I, et al. APARATE: adaptive adversarial patch for CNN-based monocular depth estimation for autonomous navigation[EB/OL]. [2024-07-29]. https://arxiv.org/abs/2303.01351.
[100] CHENG Z Y, LIANG J, CHOI H, et al. Physical attack on monocular depth estimation with optimal adversarial patches[C]//Proceedings of the European Conference on Computer Vision. Berlin, Germany: Springer, 2022: 514-532.
[101] GUESMI A, HANIF M A, OUNI B, et al. SAAM: stealthy adversarial attack on monocular depth estimation[J]. IEEE Access, 2024, 12: 13571-13585.
[102] DAIMO R, ONO S. Projection-based physical adversarial attack for monocular depth estimation[J]. IEICE Transactions on Information and Systems, 2023, 106(1): 31-35.
[103] RANJAN A, JANAI J, GEIGER A, et al. Attacking optical flow[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, Korea: IEEE Press, 2019: 2404-2413.
[104] YAMANAKA K, TAKAHASHI K, FUJII T, et al. Simultaneous attack on CNN-based monocular depth estimation and optical flow estimation[J]. IEICE Transactions on Information and Systems, 2021, 104(5): 785-788.
[105] 赵宏, 常有康, 王伟杰. 深度神经网络的对抗攻击及防御方法综述[J]. 计算机科学, 2022, 49(S2): 662-672. ZHAO H, CHANG Y K, WANG W J. Survey of adversarial attack and defense methods for deep neural networks[J]. Computer Science, 2022, 49(S2): 662-672. (in Chinese)
[106] CHEN E C, LEE C R. Data filtering for efficient adversarial training[J]. Pattern Recognition, 2024, 151: 110394.
[107] 冯妍舟, 刘建霞, 王海翼, 等. 基于多级残差信息蒸馏的真实图像去噪方法[J]. 计算机工程, 2024, 50(3): 216-223. FENG Y Z, LIU J X, WANG H Y, et al. Real image denoising method based on multi-level residual information distillation[J]. Computer Engineering, 2024, 50(3): 216-223. (in Chinese)
[108] CHEN Y, LI X D, HU P, et al. DifFilter: defending against adversarial perturbations with diffusion filter[J]. IEEE Transactions on Information Forensics and Security, 2024, 19: 6779-6794.
[109] HUANG J C, DAI Y Y, LU F, et al. Adversarial perturbation denoising utilizing common characteristics in deep feature space[J]. Applied Intelligence, 2024, 54(2): 1672-1690.
[110] XIE C, WANG J, ZHANG Z, et al. Mitigating adversarial effects through randomization[EB/OL]. [2024-07-29]. https://arxiv.org/abs/1711.01991.
[111] YIN Z X, WANG H, WANG J, et al. Defense against adversarial attacks by low-level image transformations[J]. International Journal of Intelligent Systems, 2020, 35(10): 1453-1466.
[112] FREITAS S, CHEN S T, WANG Z J, et al. UnMask: adversarial detection and defense through robust feature alignment[C]//Proceedings of the IEEE International Conference on Big Data (Big Data). Atlanta, USA: IEEE Press, 2020: 1081-1088.
[113] ABUSNAINA A, WU Y H, ARORA S, et al. Adversarial example detection using latent neighborhood graph[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Montreal, Canada: IEEE Press, 2021: 7667-7676.
[114] LIU H, ZHAO B, GUO J B, et al. A lightweight unsupervised adversarial detector based on autoencoder and isolation forest[J]. Pattern Recognition, 2024, 147: 110127.
[115] WU Y H, ARORA S S, WU Y H, et al. Beating attackers at their own games: adversarial example detection using adversarial gradient directions[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2021, 35(4): 2969-2977.
[116] ZHANG H C, WANG J Y. Towards adversarially robust object detection[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, Korea: IEEE Press, 2019: 421-430.
[117] LIU J, LEVINE A, LAU C P, et al. Segment and complete: defending object detectors against adversarial patch attacks with robust patch detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE Press, 2022: 14953-14962.
[118] ZHOU J, LIANG C, CHEN J. Manifold projection for adversarial defense on face recognition[C]//Proceedings of the 16th European Conference on Computer Vision. Glasgow, UK: Springer, 2020: 288-305.
[119] ZHU C C, LI X Q, LI J D, et al. Improving robustness of facial landmark detection by defending against adversarial attacks[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Montreal, Canada: IEEE Press, 2021: 11731-11740.
[120] JIA S, MA C, SONG Y B, et al. Robust tracking against adversarial attacks[C]//Proceedings of the European Conference on Computer Vision. Berlin, Germany: Springer, 2020: 69-84.
[121] WU Z, YU R, LIU Q, et al. Enhancing tracking robustness with auxiliary adversarial defense networks[EB/OL]. [2024-07-29]. https://arxiv.org/abs/2402.17976.
[122] ANAND A P, GOKUL H, SRINIVASAN H, et al. Adversarial patch defense for optical flow networks in video action recognition[C]//Proceedings of the 19th IEEE International Conference on Machine Learning and Applications (ICMLA). Miami, USA: IEEE Press, 2020: 1289-1296.
[123] SCHEURER E, SCHMALFUSS J, LIS A, et al. Detection defenses: an empty promise against adversarial patch attacks on optical flow[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). Waikoloa, USA: IEEE Press, 2024: 6475-6484.