[1]Nafea A A, Alameri S A, Majeed R R, et al. A Short Review on
Supervised Machine Learning and Deep Learning Techniques
in Computer Vision[J]. Babylonian Journal of Machine
Learning, 2024, 2024: 48-55.
[2]Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017, 60(6): 84-90.
[3]Grigorescu S, Trasnea B, Cocias T, et al. A survey of deep
learning techniques for autonomous driving[J]. Journal of Field
Robotics, 2020, 37(3): 362-386.
[4]Gilani S Z, Mian A. Learning from millions of 3D scans for
large-scale 3D face recognition[C]//Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition. 2018:
1896-1905.
[5]Sünderhauf N, Brock O, Scheirer W, et al. The limits and potentials of deep learning for robotics[J]. The International Journal of Robotics Research, 2018, 37(4-5): 405-420.
[6]Najafabadi M M, Villanustre F, Khoshgoftaar T M, et al. Deep learning applications and challenges in big data analytics[J]. Journal of Big Data, 2015, 2(1): 1-21.
[7]Szegedy C, Zaremba W, Sutskever I, et al. Intriguing properties of neural networks[EB/OL].[2024-07-29].https://arxiv.org/pdf/1312.6199
[8]Goodfellow I J, Shlens J, Szegedy C. Explaining and harnessing adversarial examples[EB/OL].[2024-07-29].https://arxiv.org/pdf/1412.6572
[9]Brown T B, Mané D, Roy A, et al. Adversarial patch[EB/OL].[2024-07-29].https://arxiv.org/pdf/1712.09665
[10]Zhong Y, Liu X, Zhai D, et al. Shadows can be dangerous: Stealthy and effective physical-world adversarial attack by natural phenomenon[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 15345-15354.
[11]Liu H, Zhao B, Guo J, et al. Survey on adversarial attacks towards deep learning[J]. Journal of Cryptologic Research, 2021, 8(2): 202-214. DOI: 10.13868/j.cnki.jcr.000431.
[12]Cai X, Du H. Survey on adversarial examples generation and adversarial attack method[J]. Journal of Xi'an University of Posts and Telecommunications, 2021, 26(1): 67-75. DOI: 10.13682/j.issn.2095-6533.2021.01.011.
[13]Ren K, Zheng T, Qin Z, et al. Adversarial attacks and defenses in deep learning[J]. Engineering, 2020, 6(3): 307-339.
[14]Tabacof P, Valle E. Exploring the space of adversarial
images[C]//2016 International Joint Conference on Neural Networks (IJCNN). IEEE, 2016: 426-433.
[15]Tanay T, Griffin L. A boundary tilting perspective on the phenomenon of adversarial examples[EB/OL].[2024-07-29].https://arxiv.org/pdf/1608.07690
[16]Dube S. High dimensional spaces, deep learning and adversarial examples[EB/OL].[2024-07-29].https://arxiv.org/pdf/1801.00634
[17]Amsaleg L, Bailey J, Barbe A, et al. High intrinsic
dimensionality facilitates adversarial attack: Theoretical
evidence[J]. IEEE Transactions on Information Forensics and
Security, 2020, 16: 854-865.
[18]Ilyas A, Santurkar S, Tsipras D, et al. Adversarial examples are not bugs, they are features[J]. Advances in Neural Information Processing Systems, 2019, 32.
[19]Carlini N, Wagner D. Towards evaluating the robustness of neural networks[C]//2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017: 39-57.
[20]Rozsa A, Rudd E M, Boult T E. Adversarial diversity and hard positive generation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2016: 25-32.
[21]Kurakin A, Goodfellow I, Bengio S. Adversarial machine learning at scale[EB/OL].[2024-07-29].https://arxiv.org/pdf/1611.01236
[22]Dong Y, Liao F, Pang T, et al. Boosting adversarial attacks with momentum[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 9185-9193.
[23]Wang G, Yan H, Wei X. Enhancing transferability of adversarial
examples with spatial momentum[C]//Chinese Conference on
Pattern Recognition and Computer Vision (PRCV). Cham:
Springer International Publishing, 2022: 593-604.
[24]Madry A, Makelov A, Schmidt L, et al. Towards deep learning models resistant to adversarial attacks[EB/OL].[2024-07-29].https://arxiv.org/abs/1706.06083
[25]Papernot N, McDaniel P, Jha S, et al. The limitations of deep
learning in adversarial settings[C]//2016 IEEE European
symposium on security and privacy (EuroS&P). IEEE, 2016:
372-387.
[26]Moosavi-Dezfooli S M, Fawzi A, Fawzi O, et al. Universal adversarial perturbations[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 1765-1773.
[27]Su J, Vargas D V, Sakurai K. One pixel attack for fooling deep
neural networks[J]. IEEE Transactions on Evolutionary
Computation, 2019, 23(5): 828-841.
[28]Andriushchenko M, Croce F, Flammarion N, et al. Square attack:
a query-efficient black-box adversarial attack via random
search[C]//European conference on computer vision. Cham:
Springer International Publishing, 2020: 484-501.
[29]Liu Y, Chen X, Liu C, et al. Delving into transferable adversarial examples and black-box attacks[EB/OL].[2024-07-29].https://arxiv.org/abs/1611.02770
[30]Yuan Z, Zhang J, Jia Y, et al. Meta gradient adversarial attack[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 7748-7757.
[31]Baluja S, Fischer I. Learning to attack: Adversarial
transformation networks[C]//Proceedings of the AAAI
Conference on Artificial Intelligence. 2018, 32(1).
[32]Xiao C, Li B, Zhu J Y, et al. Generating adversarial examples with adversarial networks[EB/OL].[2024-07-29].https://arxiv.org/pdf/1801.02610
[33]Mohaghegh Dolatabadi H, Erfani S, Leckie C. Advflow:
Inconspicuous black-box adversarial attacks using normalizing
flows[J]. Advances in Neural Information Processing Systems,
2020, 33: 15871-15884.
[34]Papernot N, McDaniel P, Goodfellow I, et al. Practical black-box attacks against machine learning[C]//Proceedings of the 2017 ACM Asia Conference on Computer and Communications Security. 2017: 506-519.
[35]Ma C, Chen L, Yong J H. Simulating unknown target models for
query-efficient black-box attacks[C]//Proceedings of the
IEEE/CVF Conference on Computer Vision and Pattern
Recognition. 2021: 11835-11844.
[36]Xiao C, Zhu J Y, Li B, et al. Spatially transformed adversarial examples[EB/OL].[2024-07-29].https://arxiv.org/abs/1801.02612
[37]Athalye A, Engstrom L, Ilyas A, et al. Synthesizing robust
adversarial examples[C]//International conference on machine
learning. PMLR, 2018: 284-293.
[38]Brown T B, Mané D, Roy A, et al. Adversarial patch[EB/OL].[2024-07-29].https://arxiv.org/pdf/1712.09665
[39]Wei X, Guo Y, Yu J. Adversarial sticker: A stealthy attack
method in the physical world[J]. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 2022, 45(3): 2711-2725.
[40]Li J, Schmidt F, Kolter Z. Adversarial camera stickers: A physical camera-based attack on deep learning systems[C]//International Conference on Machine Learning. PMLR, 2019: 3896-3904.
[41]Zolfi A, Kravchik M, Elovici Y, et al. The translucent patch: A physical and universal attack on object detectors[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 15232-15241.
[42]Gnanasambandam A, Sherman A M, Chan S H. Optical adversarial attack[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 92-101.
[43]Duan R, Mao X, Qin A K, et al. Adversarial laser beam: Effective physical-world attack to DNNs in a blink[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 16062-16071.
[44]Knight W. The Dark Secret at the Heart of AI[J]. Technology Review, 2017, 120(3): 54-61.
[45]Girshick R, Donahue J, Darrell T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2014: 580-587.
[46]Girshick R. Fast R-CNN[C]//Proceedings of the IEEE International Conference on Computer Vision. 2015: 1440-1448.
[47]Ren S, He K, Girshick R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. Advances in Neural Information Processing Systems, 2015, 28.
[48]Redmon J, Farhadi A. YOLO9000: Better, faster, stronger[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 7263-7271.
[49]Redmon J, Farhadi A. YOLOv3: An incremental improvement[EB/OL].[2024-07-29].https://arxiv.org/abs/1804.02767
[50]Bochkovskiy A, Wang C Y, Liao H Y M. YOLOv4: Optimal speed and accuracy of object detection[EB/OL].[2024-07-29].https://arxiv.org/abs/2004.10934
[51]Lu J, Sibai H, Fabry E. Adversarial examples that fool detectors[EB/OL].[2024-07-29].https://arxiv.org/pdf/1712.02494
[52]Xiao Y, Pun C M, Liu B. Fooling deep neural detection networks with adaptive object-oriented adversarial perturbation[J]. Pattern Recognition, 2021, 115: 107903.
[53]Wang D, Li C, Wen S, et al. Daedalus: Breaking nonmaximum
suppression in object detection via adversarial examples[J].
IEEE Transactions on Cybernetics, 2021, 52(8): 7427-7440.
[54]Shapira A, Zolfi A, Demetrio L, et al. Phantom Sponges: Exploiting Non-Maximum Suppression to Attack Deep Object Detectors[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2023: 4571-4580.
[55]Li Y, Tian D, Chang M C, et al. Robust adversarial perturbation on deep proposal-based models[EB/OL].[2024-07-29].https://arxiv.org/abs/1809.05962
[56]Lee M, Kolter Z. On physical adversarial patches for object detection[EB/OL].[2024-07-29].https://arxiv.org/abs/1906.11897
[57]Xu K, Zhang G, Liu S, et al. Adversarial t-shirt! evading person
detectors in a physical world[C]//Computer Vision–ECCV
2020: 16th European Conference, Glasgow, UK, August 23–28,
2020, Proceedings, Part V 16. Springer International Publishing,
2020: 665-681.
[58]Zhang Y, Gong Z, Zhang Y, et al. Transferable physical attack against object detection with separable attention[EB/OL].[2024-07-29].https://arxiv.org/abs/2205.09592
[59]Yang K, Tsai T, Yu H, et al. Beyond digital domain: Fooling
deep learning based recognition system in physical
world[C]//Proceedings of the AAAI Conference on Artificial
Intelligence. 2020, 34(01): 1088-1095.
[60]Hu Y C T, Kung B H, Tan D S, et al. Naturalistic physical
adversarial patch for object detectors[C]//Proceedings of the
IEEE/CVF International Conference on Computer Vision. 2021:
7848-7857.
[61]Guesmi A, Bilasco I M, Shafique M, et al. AdvART: Adversarial Art for Camouflaged Object Detection Attacks[EB/OL].[2024-07-29].https://arxiv.org/pdf/2303.01734
[62]Liu A, Liu X, Fan J, et al. Perceptual-sensitive GAN for generating adversarial patches[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2019, 33(01): 1028-1035.
[63]Duan R, Ma X, Wang Y, et al. Adversarial camouflage: Hiding physical-world attacks with natural styles[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 1000-1008.
[64]Wang J, Liu A, Yin Z, et al. Dual attention suppression attack: Generate adversarial camouflage in physical world[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 8565-8574.
[65]Hu Z, Huang S, Zhu X, et al. Adversarial texture for fooling person detectors in the physical world[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 13307-13316.
[66]Hu Z, Chu W, Zhu X, et al. Physically Realizable Natural Looking Clothing Textures Evade Person Detectors via 3D Modeling[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 16975-16984.
[67]Zhu X, Hu Z, Huang S, et al. Infrared invisible clothing: Hiding
from infrared detectors at multiple angles in real
world[C]//Proceedings of the IEEE/CVF Conference on
Computer Vision and Pattern Recognition. 2022: 13317-13326.
[68]Suryanto N, Kim Y, Kang H, et al. DTA: Physical camouflage attacks using differentiable transformation network[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 15305-15314.
[69]Suryanto N, Kim Y, Larasati H T, et al. ACTIVE: Towards Highly Transferable 3D Physical Camouflage for Universal and Robust Vehicle Evasion[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 4305-4314.
[70]Toheed A, Yousaf M H, Javed A. Physical adversarial attack scheme on object detectors using 3D adversarial object[C]//2022 2nd International Conference on Digital Futures and Transformative Technologies (ICoDT2). IEEE, 2022: 1-4.
[71]Alparslan Y, Alparslan K, Keim-Shenk J, et al. Adversarial attacks on convolutional neural networks in facial recognition domain[EB/OL].[2024-07-29].https://arxiv.org/abs/2001.11137
[72]Dabouei A, Soleymani S, Dawson J, et al. Fast geometrically-perturbed adversarial faces[C]//2019 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2019: 1979-1988.
[73]Lin C Y, Chen F J, Ng H F, et al. Invisible Adversarial Attacks on Deep Learning-based Face Recognition Models[J]. IEEE Access, 2023.
[74]Hussain S, Huster T, Mesterharm C, et al. Reface: Real-time adversarial attacks on face recognition systems[EB/OL].[2024-07-29].https://arxiv.org/abs/2206.04783
[75]Deb D, Zhang J, Jain A K. Advfaces: Adversarial face synthesis[C]//2020 IEEE International Joint Conference on Biometrics (IJCB). IEEE, 2020: 1-10.
[76]Zhong Y, Deng W. Towards transferable adversarial attack
against deep face recognition[J]. IEEE Transactions on
Information Forensics and Security, 2020, 16: 1452-1466.
[77]Shen M, Yu H, Zhu L, et al. Effective and robust physical-world
attacks on deep learning face recognition systems[J]. IEEE
Transactions on Information Forensics and Security, 2021, 16:
4063-4077.
[78]Komkov S, Petiushko A. Advhat: Real-world adversarial attack
on arcface face id system[C]//2020 25th International
Conference on Pattern Recognition (ICPR). IEEE, 2021:
819-826.
[79]Zheng X, Fan Y, Wu B, et al. Robust physical-world attacks on
face recognition[J]. Pattern Recognition, 2023, 133: 109009.
[80]Sharif M, Bhagavatula S, Bauer L, et al. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition[C]//Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. 2016: 1528-1540.
[81]Gong H, Dong M, Ma S, et al. Stealthy Physical Masked Face
Recognition Attack via Adversarial Style Optimization[J].
IEEE Transactions on Multimedia, 2023.
[82]Yang X, Liu C, Xu L, et al. Towards Effective Adversarial Textured 3D Meshes on Physical Face Recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 4119-4128.
[83]Yin B, Wang W, Yao T, et al. Adv-makeup: A new imperceptible and transferable attack on face recognition[EB/OL].[2024-07-29].https://arxiv.org/abs/2105.03162
[84]Nguyen D L, Arora S S, Wu Y, et al. Adversarial light projection attacks on face recognition systems: A feasibility study[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. 2020: 814-815.
[85]Zhou Z, Tang D, Wang X, et al. Invisible mask: Practical attacks on face recognition with infrared[EB/OL].[2024-07-29].https://arxiv.org/abs/1803.04683
[86]Yan B, Wang D, Lu H, et al. Cooling-shrinking attack: Blinding the tracker with imperceptible noises[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 990-999.
[87]Chen X, Yan X, Zheng F, et al. One-shot adversarial attacks on visual tracking with dual attention[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 10176-10185.
[88]Chen X, Fu C, Zheng F, et al. A unified multi-scenario attacking
network for visual object tracking[C]//Proceedings of the AAAI
Conference on Artificial Intelligence. 2021, 35(2): 1097-1104.
[89]Lin D, Chen Q, Zhou C, et al. Tracklet-Switch and Imperceivable Adversarial Attack Against Pedestrian Multi-Object Tracking Trackers[J]. Available at SSRN 4697428.
[90]Jia Y J, Lu Y, Shen J, et al. Fooling detection alone is not enough: Adversarial attack against multiple object tracking[C]//International Conference on Learning Representations (ICLR'20). 2020.
[91]Wiyatno R R, Xu A. Physical adversarial textures that fool visual object tracking[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 4822-4831.
[92]Ding L, Wang Y, Yuan K, et al. Towards universal physical
attacks on single object tracking[C]//Proceedings of the AAAI
Conference on Artificial Intelligence. 2021, 35(2): 1236-1245.
[93]Wong A, Cicek S, Soatto S. Targeted adversarial perturbations for monocular depth prediction[J]. Advances in Neural Information Processing Systems, 2020, 33: 8486-8497.
[94]Guesmi A, Hanif M A, Alouani I, et al. APARATE: Adaptive Adversarial Patch for CNN-based Monocular Depth Estimation for Autonomous Navigation[EB/OL].[2024-07-29].https://arxiv.org/abs/2303.01351
[95]Cheng Z, Liang J, Choi H, et al. Physical attack on monocular
depth estimation with optimal adversarial patches[C]//European
Conference on Computer Vision. Cham: Springer Nature
Switzerland, 2022: 514-532.
[96]Daimo R, Ono S. Projection-Based Physical Adversarial Attack for Monocular Depth Estimation[J]. IEICE Transactions on Information and Systems, 2023, 106(1): 31-35.
[97]Ranjan A, Janai J, Geiger A, et al. Attacking optical flow[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 2404-2413.
[98]Yamanaka K, Takahashi K, Fujii T, et al. Simultaneous attack on CNN-based monocular depth estimation and optical flow estimation[J]. IEICE Transactions on Information and Systems, 2021, 104(5): 785-788.
[99]Zhao H, Chang Y, Wang W, et al. Survey of adversarial attack and defense methods for deep neural networks[J]. Computer Science, 2022, 49(S2): 662-672.
[100]Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[EB/OL].[2024-07-29].https://arxiv.org/abs/1409.1556
[101]Guesmi A, Hanif M A, Ouni B, et al. Saam: Stealthy adversarial
attack on monocular depth estimation[J]. IEEE Access, 2024.
[102]Li Y, Xu G, Li W. Fa: A fast method to attack real-time object detection systems[C]//2020 IEEE/CIC International Conference on Communications in China (ICCC). IEEE, 2020: 1268-1273.
[103]Guesmi A, Hanif M A, Ouni B, et al. Physical adversarial
attacks for camera-based smart systems: Current trends,
categorization, applications, research challenges, and future
outlook[J]. IEEE Access, 2023.
[104]Wang Z, Wang X, Ma J, et al. A review of adversarial sample attacks for computer vision systems[J]. Journal of Computers, 2023, 46(2): 436-468.
[105]Chen E C, Lee C R. Data filtering for efficient adversarial
training[J]. Pattern Recognition, 2024, 151: 110394.
[106]Chen Y, Li X, Wang X, et al. DifFilter: Defending Against
Adversarial Perturbations with Diffusion Filter[J]. IEEE
Transactions on Information Forensics and Security, 2024.
[107]Huang J, Dai Y, Lu F, et al. Adversarial perturbation denoising
utilizing common characteristics in deep feature space[J].
Applied Intelligence, 2024, 54(2): 1672-1690.
[108]Xie C, Wang J, Zhang Z, et al. Mitigating adversarial effects through randomization[EB/OL].[2024-07-29].https://arxiv.org/abs/1711.01991
[109]Yin Z, Wang H, Wang J, et al. Defense against adversarial attacks by low-level image transformations[J]. International Journal of Intelligent Systems, 2020, 35(10): 1453-1466.
[110]Freitas S, Chen S T, Wang Z J, et al. Unmask: Adversarial detection and defense through robust feature alignment[C]//2020 IEEE International Conference on Big Data (Big Data). IEEE, 2020: 1081-1088.
[111]Abusnaina A, Wu Y, Arora S, et al. Adversarial example
detection using latent neighborhood graph[C]//Proceedings of
the IEEE/CVF International Conference on Computer Vision.
2021: 7687-7696.
[112]Liu H, Zhao B, Guo J, et al. A lightweight unsupervised adversarial detector based on autoencoder and isolation forest[J]. Pattern Recognition, 2024, 147: 110127.
[113]Wu Y, Arora S S, Wu Y, et al. Beating attackers at their own games: Adversarial example detection using adversarial gradient directions[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2021, 35(4): 2969-2977.
[114]Zhang H, Wang J. Towards adversarially robust object detection[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 421-430.
[115]Liu J, Levine A, Lau C P, et al. Segment and complete:
Defending object detectors against adversarial patch attacks
with robust patch detection[C]//Proceedings of the IEEE/CVF
Conference on Computer Vision and Pattern Recognition. 2022:
14973-14982.
[116]Zhou J, Liang C, Chen J. Manifold projection for adversarial
defense on face recognition[C]//Computer Vision–ECCV 2020:
16th European Conference, Glasgow, UK, August 23–28, 2020,
Proceedings, Part XXX 16. Springer International Publishing,
2020: 288-305.
[117]Zhu C, Li X, Li J, et al. Improving robustness of facial landmark detection by defending against adversarial attacks[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 11751-11760.
[118]Jia S, Ma C, Song Y, et al. Robust tracking against adversarial
attacks[C]//Computer Vision–ECCV 2020: 16th European
Conference, Glasgow, UK, August 23–28, 2020, Proceedings,
Part XIX 16. Springer International Publishing, 2020: 69-84.
[119]Wu Z, Yu R, Liu Q, et al. Enhancing Tracking Robustness with Auxiliary Adversarial Defense Networks[EB/OL].[2024-07-29].https://arxiv.org/abs/2402.17976
[120]Anand A P, Gokul H, Srinivasan H, et al. Adversarial patch
defense for optical flow networks in video action
recognition[C]//2020 19th IEEE International Conference on
Machine Learning and Applications (ICMLA). IEEE, 2020:
1289-1296.
[121]Scheurer E, Schmalfuss J, Lis A, et al. Detection defenses: An
empty promise against adversarial patch attacks on optical
flow[C]//Proceedings of the IEEE/CVF Winter Conference on
Applications of Computer Vision. 2024: 6489-6498.
[122]Chen X, Gao X, Zhao J, et al. Advdiffuser: Natural adversarial
example synthesis with diffusion models[C]//Proceedings of
the IEEE/CVF International Conference on Computer Vision.
2023: 4562-4572.
[123]Jiang Y, Zhang L. Survey of adversarial attacks and defense methods for deep learning model[J]. Computer Engineering, 2021, 47(1): 1-11. DOI: 10.19678/j.issn.1000-3428.0059156.
[124]Wang F, Zhang F, Du J, et al. Adversarial examples detection method based on image denoising and compression[J]. Computer Engineering, 2023, 49(10): 230-238. DOI: 10.19678/j.issn.1000-3428.0065638.