[1] LECUN Y, BENGIO Y, HINTON G E. Deep learning. Nature, 2015, 521(7553): 436-444. doi: 10.1038/nature14539
[2] OTTER D W, MEDINA J R, KALITA J K. A survey of the usages of deep learning for natural language processing. IEEE Transactions on Neural Networks and Learning Systems, 2021, 32(2): 604-624. doi: 10.1109/TNNLS.2020.2979670
[3] BOU NASSIF A, SHAHIN I, ATTILI I, et al. Speech recognition using deep neural networks: a systematic review. IEEE Access, 2019, 7: 19143-19165. doi: 10.1109/ACCESS.2019.2896880
[4] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks. Communications of the ACM, 2017, 60(6): 84-90. doi: 10.1145/3065386
[5] EYKHOLT K, EVTIMOV I, FERNANDES E, et al. Robust physical-world attacks on deep learning visual classification[C]//Proceedings of Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2018: 1625-1634.
[6] HAO S Y, HAO G. Research on OCT image processing based on deep learning[C]//Proceedings of the 10th International Conference on Electronics Information and Emergency Communication. Washington D.C., USA: IEEE Press, 2020: 208-212.
[7] 陈宇飞, 沈超, 王骞, 等. 人工智能系统安全与隐私风险. 计算机研究与发展, 2019, 56(10): 2135-2150. doi: 10.7544/issn1000-1239.2019.20190415
CHEN Y F, SHEN C, WANG Q, et al. Security and privacy risks in artificial intelligence systems. Journal of Computer Research and Development, 2019, 56(10): 2135-2150.
[8] 纪守领, 杜天宇, 李进锋, 等. 机器学习模型安全与隐私研究综述. 软件学报, 2021, 32(1): 41-67.
JI S L, DU T Y, LI J F, et al. Security and privacy of machine learning models: a survey. Journal of Software, 2021, 32(1): 41-67.
[9]
[10] AKHTAR N, MIAN A. Threat of adversarial attacks on deep learning in computer vision: a survey. IEEE Access, 2018, 6: 14410-14430. doi: 10.1109/ACCESS.2018.2807385
[11] 段广晗, 马春光, 宋蕾, 等. 深度学习中对抗样本的构造及防御研究. 网络与信息安全学报, 2020, 6(2): 1-11.
DUAN G H, MA C G, SONG L, et al. Research on the construction and defense of adversarial examples in deep learning. Chinese Journal of Network and Information Security, 2020, 6(2): 1-11.
[12]
[13] LIU J Y, ZHANG W M, ZHANG Y W, et al. Detection based defense against adversarial examples from the steganalysis point of view[C]//Proceedings of Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2019: 4820-4829.
[14] COHEN G, SAPIRO G, GIRYES R. Detecting adversarial samples using influence functions and nearest neighbors[C]//Proceedings of Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2020: 14441-14450.
[15] 潘文雯, 王新宇, 宋明黎, 等. 对抗样本生成技术综述. 软件学报, 2020, 31(1): 67-81.
PAN W W, WANG X Y, SONG M L, et al. Survey on generating adversarial examples. Journal of Software, 2020, 31(1): 67-81.
[16] 张思思, 左信, 刘建伟. 深度学习中的对抗样本问题. 计算机学报, 2019, 42(8): 1886-1904.
ZHANG S S, ZUO X, LIU J W. The problem of adversarial examples in deep learning. Chinese Journal of Computers, 2019, 42(8): 1886-1904.
[17]
[18]
[19] CARLINI N, WAGNER D. Towards evaluating the robustness of neural networks[C]//Proceedings of Symposium on Security and Privacy. Washington D.C., USA: IEEE Press, 2017: 39-57.
[20] MOOSAVI-DEZFOOLI S M, FAWZI A, FROSSARD P. DeepFool: a simple and accurate method to fool deep neural networks[C]//Proceedings of Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2016: 2574-2582.
[21] PAPERNOT N, MCDANIEL P, JHA S, et al. The limitations of deep learning in adversarial settings[C]//Proceedings of European Symposium on Security and Privacy. Washington D.C., USA: IEEE Press, 2016: 372-387.
[22] PAPERNOT N, MCDANIEL P, WU X, et al. Distillation as a defense to adversarial perturbations against deep neural networks[C]//Proceedings of Symposium on Security and Privacy. Washington D.C., USA: IEEE Press, 2016: 582-597.
[23] PRAKASH A, MORAN N, GARBER S, et al. Deflecting adversarial attacks with pixel deflection[C]//Proceedings of Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2018: 8571-8580.
[24]
[25] ZUO F, ZENG Q. Exploiting the sensitivity of L2 adversarial examples to erase-and-restore[C]//Proceedings of ACM Asia Conference on Computer and Communications Security. New York, USA: ACM Press, 2021: 40-51.
[26] XU W L, EVANS D, QI Y J. Feature squeezing: detecting adversarial examples in deep neural networks[C]//Proceedings of Network and Distributed System Security Symposium. San Diego, USA: Internet Society, 2018: 1-10.
[27] MASCI J, MEIER U, CIREŞAN D, et al. Stacked convolutional auto-encoders for hierarchical feature extraction[C]//Proceedings of International Conference on Artificial Neural Networks. Berlin, Germany: Springer, 2011: 52-59.
[28] VINCENT P, LAROCHELLE H, BENGIO Y, et al. Extracting and composing robust features with denoising autoencoders[C]//Proceedings of the 25th International Conference on Machine Learning. New York, USA: ACM Press, 2008: 1096-1103.
[29]
[30]
[31] MENG D Y, CHEN H. MagNet: a two-pronged defense against adversarial examples[C]//Proceedings of ACM SIGSAC Conference on Computer and Communications Security. New York, USA: ACM Press, 2017: 135-147.