| 1 |
GU T Y, DOLAN-GAVITT B, GARG S. BadNets: identifying vulnerabilities in the machine learning model supply chain[EB/OL]. [2024-05-05]. https://arxiv.org/abs/1708.06733.
|
| 2 |
HE Y, SHEN Z L, XIA C, et al. SGBA: a stealthy scapegoat backdoor attack against deep neural networks[J]. Computers & Security, 2024, 136: 103523.
|
| 3 |
LI H L, WANG Y F, XIE X F, et al. Light can hack your face! black-box backdoor attack on face recognition systems[EB/OL]. [2024-05-05]. https://arxiv.org/abs/2009.06996.
|
| 4 |
LIN J Y, XU L, LIU Y Q, et al. Composite backdoor attack for deep neural network by mixing existing benign features[C]//Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security. New York, USA: ACM Press, 2020: 113-131.
|
| 5 |
|
| 6 |
LI S F, XUE M H, ZHAO B Z H, et al. Invisible backdoor attacks on deep neural networks via steganography and regularization[J]. IEEE Transactions on Dependable and Secure Computing, 2021, 18(5): 2088-2105.
|
| 7 |
LI Y Z, LI Y M, WU B Y, et al. Invisible backdoor attack with sample-specific triggers[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Washington D.C., USA: IEEE Press, 2022: 16443-16452.
|
| 8 |
|
| 9 |
SARKAR E, BENKRAOUDA H, KRISHNAN G, et al. FaceHack: attacking facial recognition systems using malicious facial characteristics[J]. IEEE Transactions on Biometrics, Behavior, and Identity Science, 2022, 4(3): 361-372. doi: 10.1109/TBIOM.2021.3132132.
|
| 10 |
GAO K F, BAI J W, WU B Y, et al. Imperceptible and robust backdoor attack in 3D point cloud[J]. IEEE Transactions on Information Forensics and Security, 2024, 19: 1267-1282. doi: 10.1109/TIFS.2023.3333687.
|
| 11 |
ZHANG J, CHEN D D, HUANG Q D, et al. Poison Ink: robust and invisible backdoor attack[J]. IEEE Transactions on Image Processing, 2022, 31: 5691-5705. doi: 10.1109/TIP.2022.3201472.
|
| 12 |
SALEM A, WEN R, BACKES M, et al. Dynamic backdoor attacks against machine learning models[C]//Proceedings of the 7th IEEE European Symposium on Security and Privacy (EuroS&P). Washington D.C., USA: IEEE Press, 2022: 703-718.
|
| 13 |
LIU Y F, MA X J, BAILEY J, et al. Reflection backdoor: a natural backdoor attack on deep neural networks[EB/OL]. [2024-05-05]. https://arxiv.org/abs/2007.02343.
|
| 14 |
YAN Z C, LI G L, TIAN Y, et al. DeHiB: deep hidden backdoor attack on semi-supervised learning via adversarial perturbation[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2021, 35(12): 10585-10593. doi: 10.1609/aaai.v35i12.17266.
|
| 15 |
BARNI M, KALLAS K, TONDI B. A new backdoor attack in CNNs by training set corruption without label poisoning[C]//Proceedings of the IEEE International Conference on Image Processing (ICIP). Washington D.C., USA: IEEE Press, 2019: 101-105.
|
| 16 |
WENGER E, PASSANANTI J, BHAGOJI A N, et al. Backdoor attacks against deep learning systems in the physical world[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2021: 6202-6211.
|
| 17 |
XUE M F, HE C, WU Y H, et al. PTB: robust physical backdoor attacks against deep neural networks in real world[J]. Computers & Security, 2022, 118: 102726.
|
| 18 |
HAN X S, XU G W, ZHOU Y, et al. Physical backdoor attacks to lane detection systems in autonomous driving[C]//Proceedings of the 30th ACM International Conference on Multimedia. New York, USA: ACM Press, 2022: 2957-2968.
|
| 19 |
SHAFAHI A, HUANG W R, NAJIBI M, et al. Poison frogs! targeted clean-label poisoning attacks on neural networks[EB/OL]. [2024-05-05]. https://arxiv.org/abs/1804.00792.
|
| 20 |
CHENG S Y, LIU Y Q, MA S Q, et al. Deep feature space Trojan attack of neural networks by controlled detoxification[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2021, 35(2): 1148-1156. doi: 10.1609/aaai.v35i2.16201.
|
| 21 |
|
| 22 |
SAHA A, SUBRAMANYA A, PIRSIAVASH H. Hidden trigger backdoor attacks[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34(7): 11957-11965. doi: 10.1609/aaai.v34i07.6871.
|
| 23 |
NING R, LI J, XIN C S, et al. Invisible poison: a blackbox clean label backdoor attack to deep neural networks[C]//Proceedings of the IEEE Conference on Computer Communications. Washington D.C., USA: IEEE Press, 2021: 1-10.
|
| 24 |
|
| 25 |
SALEM A, BACKES M, ZHANG Y. Don't trigger me! A triggerless backdoor attack against deep neural networks[EB/OL]. [2024-05-05]. https://arxiv.org/abs/2010.03282.
|
| 26 |
TANG R X, DU M N, LIU N H, et al. An embarrassingly simple approach for Trojan attack in deep neural networks[C]//Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. New York, USA: ACM Press, 2020: 218-228.
|
| 27 |
LI Y C, HUA J Y, WANG H Y, et al. DeepPayload: black-box backdoor attack on deep learning models through neural payload injection[C]//Proceedings of the 43rd IEEE/ACM International Conference on Software Engineering (ICSE). New York, USA: ACM Press, 2021: 263-274.
|
| 28 |
DUMFORD J, SCHEIRER W. Backdooring convolutional neural networks via targeted weight perturbations[C]//Proceedings of the IEEE International Joint Conference on Biometrics (IJCB). Washington D.C., USA: IEEE Press, 2021: 1-9.
|
| 29 |
COSTALES R, MAO C Z, NORWITZ R, et al. Live Trojan attacks on deep neural networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Washington D.C., USA: IEEE Press, 2020: 3460-3469.
|
| 30 |
QI X Y, ZHU J F, XIE C L, et al. Subnet replacement: deployment-stage backdoor attack against deep neural networks in gray-box setting[EB/OL]. [2024-05-05]. https://arxiv.org/abs/2107.07240.
|
| 31 |
GARG S, KUMAR A, GOEL V, et al. Can adversarial weight perturbations inject neural backdoors[C]//Proceedings of the 29th ACM International Conference on Information & Knowledge Management. New York, USA: ACM Press, 2020: 2029-2032.
|
| 32 |
HONG S, CARLINI N, KURAKIN A. Handcrafted backdoors in deep neural networks[C]//Proceedings of the 36th International Conference on Neural Information Processing Systems. New York, USA: ACM Press, 2022: 8068-8080.
|
| 33 |
RAKIN A S, HE Z Z, FAN D L. TBT: targeted neural network attack with bit Trojan[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2020: 13195-13204.
|
| 34 |
CHEN H L, FU C, ZHAO J S, et al. ProFlip: targeted Trojan attack with progressive bit flips[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Washington D.C., USA: IEEE Press, 2022: 7698-7707.
|
| 35 |
|
| 36 |
GAO Y S, DOAN B G, ZHANG Z, et al. Backdoor attacks and countermeasures on deep learning: a comprehensive review[EB/OL]. [2024-05-05]. https://arxiv.org/abs/2007.10760.
|
| 37 |
XU J, KOFFAS S, ERSOY O, et al. Watermarking graph neural networks based on backdoor attacks[C]//Proceedings of the 8th IEEE European Symposium on Security and Privacy (EuroS&P). Washington D.C., USA: IEEE Press, 2023: 1179-1197.
|
| 38 |
|
| 39 |
|
| 40 |
SUN Z S, DU X N, SONG F, et al. CoProtector: protect open-source code against unauthorized training usage with data poisoning[C]//Proceedings of the ACM Web Conference 2022. New York, USA: ACM Press, 2022: 652-660.
|
| 41 |
GONG X L, CHEN Y J, WANG Q, et al. Defense-resistant backdoor attacks against deep neural networks in outsourced cloud environment[J]. IEEE Journal on Selected Areas in Communications, 2021, 39(8): 2617-2631. doi: 10.1109/JSAC.2021.3087237.
|
| 42 |
|
| 43 |
XIANG Z, MILLER D J, CHEN S H, et al. A backdoor attack against 3D point cloud classifiers[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Washington D.C., USA: IEEE Press, 2022: 7577-7587.
|
| 44 |
|
| 45 |
TANCIK M, MILDENHALL B, NG R. StegaStamp: invisible hyperlinks in physical photographs[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2020: 2114-2123.
|
| 46 |
|
| 47 |
ZHANG R, ISOLA P, EFROS A A, et al. The unreasonable effectiveness of deep features as a perceptual metric[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2018: 586-595.
|
| 48 |
WANG Y L, ZHAO M H, LI S H, et al. Dispersed pixel perturbation-based imperceptible backdoor trigger for image classifier models[J]. IEEE Transactions on Information Forensics and Security, 2022, 17: 3091-3106. doi: 10.1109/TIFS.2022.3202687.
|
| 49 |
DUAN R J, MA X J, WANG Y S, et al. Adversarial camouflage: hiding physical-world attacks with natural styles[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2020: 997-1005.
|
| 50 |
CHANG C C, HSIAO J Y, CHAN C S. Finding optimal least-significant-bit substitution in image hiding by dynamic programming strategy[J]. Pattern Recognition, 2003, 36(7): 1583-1595. doi: 10.1016/S0031-3203(02)00289-3.
|
| 51 |
DENG L. The MNIST database of handwritten digit images for machine learning research [best of the Web][J]. IEEE Signal Processing Magazine, 2012, 29(6): 141-142. doi: 10.1109/MSP.2012.2211477.
|
| 52 |
|
| 53 |
|
| 54 |
SOURI H, FOWL L, CHELLAPPA R, et al. Sleeper agent: scalable hidden trigger backdoors for neural networks trained from scratch[EB/OL]. [2024-05-05]. https://arxiv.org/abs/2106.08970.
|
| 55 |
DENG J, DONG W, SOCHER R, et al. ImageNet: a large-scale hierarchical image database[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2009: 248-255.
|
| 56 |
PARKHI O M, VEDALDI A, ZISSERMAN A. Deep face recognition[C]//Proceedings of the British Machine Vision Conference 2015. Washington D.C., USA: IEEE Press, 2015: 1-12.
|
| 57 |
CAO Q, SHEN L, XIE W D, et al. VGGFace2: a dataset for recognising faces across pose and age[C]//Proceedings of the 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018). Washington D.C., USA: IEEE Press, 2018: 67-74.
|
| 58 |
|
| 59 |
LIU Z W, LUO P, WANG X G, et al. Deep learning face attributes in the wild[C]//Proceedings of the IEEE International Conference on Computer Vision (ICCV). Washington D.C., USA: IEEE Press, 2016: 3730-3738.
|
| 60 |
SUN Y, WANG X G, TANG X O. Deep learning face representation from predicting 10,000 classes[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2014: 1891-1898.
|
| 61 |
WOLF L, HASSNER T, MAOZ I. Face recognition in unconstrained videos with matched background similarity[C]//Proceedings of CVPR 2011. Washington D.C., USA: IEEE Press, 2011: 529-534.
|
| 62 |
KUMAR N, BERG A C, BELHUMEUR P N, et al. Attribute and simile classifiers for face verification[C]//Proceedings of the 12th IEEE International Conference on Computer Vision. Washington D.C., USA: IEEE Press, 2010: 365-372.
|
| 63 |
ZHANG N, PALURI M, TAIGMAN Y, et al. Beyond frontal faces: improving person recognition using multiple cues[EB/OL]. [2024-05-05]. https://arxiv.org/abs/1501.05703.
|
| 64 |
STALLKAMP J, SCHLIPSING M, SALMEN J, et al. Man vs. computer: benchmarking machine learning algorithms for traffic sign recognition[J]. Neural Networks, 2012, 32: 323-332. doi: 10.1016/j.neunet.2012.02.016.
|
| 65 |
|
| 66 |
HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2016: 770-778.
|
| 67 |
HUANG G, LIU Z, VAN DER MAATEN L, et al. Densely connected convolutional networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2017: 2261-2269.
|
| 68 |
DÉSIDÉRI J A. Multiple-Gradient Descent Algorithm (MGDA) for multiobjective optimization[J]. Comptes Rendus Mathematique, 2012, 350(5/6): 313-318.
|
| 69 |
RECTOR-BROOKS J, WANG J K, MOZAFARI B. Revisiting projection-free optimization for strongly convex constraint sets[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2019, 33(1): 1576-1583. doi: 10.1609/aaai.v33i01.33011576.
|
| 70 |
WANG B L, YAO Y S, SHAN S, et al. Neural cleanse: identifying and mitigating backdoor attacks in neural networks[C]//Proceedings of the IEEE Symposium on Security and Privacy (SP). Washington D.C., USA: IEEE Press, 2019: 707-723.
|
| 71 |
|
| 72 |
|
| 73 |
CHOU E, TRAMÈR F, PELLEGRINO G. SentiNet: detecting localized universal attacks against deep learning systems[C]//Proceedings of the IEEE Security and Privacy Workshops (SPW). Washington D.C., USA: IEEE Press, 2020: 48-54.
|
| 74 |
WANG R, ZHANG G Y, LIU S J, et al. Practical detection of Trojan neural networks: data-limited and data-free cases[EB/OL]. [2024-05-05]. https://arxiv.org/abs/2007.15802.
|
| 75 |
TRAN B, LI J, MADRY A. Spectral signatures in backdoor attacks[C]//Proceedings of the 32nd International Conference on Neural Information Processing Systems. New York, USA: ACM Press, 2018: 8011-8021.
|
| 76 |
CHEN B, CARVALHO W, BARACALDO N, et al. Detecting backdoor attacks on deep neural networks by activation clustering[EB/OL]. [2024-05-05]. https://arxiv.org/abs/1811.03728.
|
| 77 |
LIU Y Q, LEE W C, TAO G H, et al. ABS: scanning neural networks for back-doors by artificial brain stimulation[C]//Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. New York, USA: ACM Press, 2019: 1265-1282.
|
| 78 |
VELDANDA A K, LIU K, TAN B, et al. NNoculation: catching BadNets in the wild[C]//Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security. New York, USA: ACM Press, 2021: 49-60.
|
| 79 |
|
| 80 |
SELVARAJU R R, COGSWELL M, DAS A, et al. Grad-CAM: visual explanations from deep networks via gradient-based localization[J]. International Journal of Computer Vision, 2020, 128(2): 336-359. doi: 10.1007/s11263-019-01228-7.
|
| 81 |
KOLOURI S, SAHA A, PIRSIAVASH H, et al. Universal litmus patterns: revealing backdoor attacks in CNNs[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C., USA: IEEE Press, 2020: 298-307.
|
| 82 |
HUANG X J, ALZANTOT M, SRIVASTAVA M. NeuronInspect: detecting backdoors in neural networks via output explanations[EB/OL]. [2024-05-05]. https://arxiv.org/abs/1911.07399.
|
| 83 |
ADI Y, BAUM C, CISSE M, et al. Turning your weakness into a strength: watermarking deep neural networks by backdooring[C]//Proceedings of the 27th USENIX Conference on Security Symposium. San Diego, USA: USENIX Association, 2018: 1615-1631.
|
| 84 |
BARNI M, PÉREZ-GONZÁLEZ F, TONDI B. DNN watermarking: four challenges and a funeral[C]//Proceedings of the 2021 ACM Workshop on Information Hiding and Multimedia Security. New York, USA: ACM Press, 2021: 189-196.
|
| 85 |
LI Z, HU C Y, ZHANG Y, et al. How to prove your model belongs to you: a blind-watermark based framework to protect intellectual property of DNN[C]//Proceedings of the 35th Annual Computer Security Applications Conference. New York, USA: ACM Press, 2019: 126-137.
|
| 86 |
LIU G Y, XU T L, MA X Q, et al. Your model trains on my data? Protecting intellectual property of training data via membership fingerprint authentication[J]. IEEE Transactions on Information Forensics and Security, 2022, 17: 1024-1037. doi: 10.1109/TIFS.2022.3155921.
|
| 87 |
|
| 88 |
LE T, PARK N, LEE D. A sweet rabbit hole by DARCY: using honeypots to detect universal trigger's adversarial attacks[EB/OL]. [2024-05-05]. https://arxiv.org/abs/2011.10492.
|
| 89 |
SHAN S, WENGER E, WANG B L, et al. Gotta Catch 'Em All: using honeypots to catch adversarial attacks on neural networks[C]//Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security. New York, USA: ACM Press, 2020: 67-83.
|