[1]
[2]
[3]
[4] WANG S, GONG Y X. Adversarial example detection based on saliency map features. Applied Intelligence, 2022, 52(6): 6262-6275. doi: 10.1007/s10489-021-02759-8.
[5]
[6]
[7] DANSKIN J M. The theory of max-min and its application to weapons allocation problems. Berlin, Germany: Springer, 1967.
[8] DONG Y P, LIAO F Z, PANG T Y, et al. Boosting adversarial attacks with momentum[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2018: 9185-9193.
[9]
[10]
[11] PAPERNOT N, MCDANIEL P, WU X, et al. Distillation as a defense to adversarial perturbations against deep neural networks[C]//Proceedings of IEEE Symposium on Security and Privacy. Washington D. C., USA: IEEE Press, 2016: 582-597.
[12] CARLINI N, WAGNER D. Adversarial examples are not easily detected: bypassing ten detection methods[C]//Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. New York, USA: ACM Press, 2017: 3-14.
[13] MOOSAVI-DEZFOOLI S M, FAWZI A, FROSSARD P. DeepFool: a simple and accurate method to fool deep neural networks[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2016: 2574-2582.
[14]
[15] MENG D Y, CHEN H. MagNet: a two-pronged defense against adversarial examples[C]//Proceedings of 2017 ACM SIGSAC Conference on Computer and Communications Security. New York, USA: ACM Press, 2017: 135-147.
[16] KURAKIN A, GOODFELLOW I, BENGIO S, et al. Adversarial attacks and defences competition. Berlin, Germany: Springer, 2018.
[17] HAARNOJA T, ZHOU A, ABBEEL P, et al. Soft actor-critic: off-policy maximum entropy deep reinforcement learning with a stochastic actor[EB/OL]. [2021-12-20]. https://arxiv.org/abs/1801.01290.
[18]
[19] XIE C H, ZHANG Z S, ZHOU Y Y, et al. Improving transferability of adversarial examples with input diversity[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2019: 2725-2734.
[20] PAPERNOT N, MCDANIEL P, GOODFELLOW I. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples[EB/OL]. [2021-12-20]. https://arxiv.org/abs/1605.07277.
[21]
[22]
[23]
[24]
[25] ZHANG H C, WANG J Y. Defense against adversarial attacks using feature scattering-based adversarial training[EB/OL]. [2021-12-20]. https://arxiv.org/abs/1907.10764.
[26]
[27] LOUIZOS C, WELLING M. Structured and efficient variational deep learning with matrix Gaussian posteriors[EB/OL]. [2021-12-20]. https://arxiv.org/abs/1603.04733.
[28]
[29] XU Y. Spiking neuron online learning method based on gradient descent. Computer Engineering, 2015, 41(12): 150-155, 160. (in Chinese)
[30] YE M, GONG C Y, LIU Q. SAFER: a structure-free approach for certified robustness to adversarial word substitutions[EB/OL]. [2021-12-20]. https://arxiv.org/abs/2005.14424.
[31] CHEN J, TANG J Y, ZHU S G, et al. Application of deep stack autoencoder network in ship weight estimation. Computer Engineering, 2019, 45(5): 315-320. (in Chinese)
[32] REN S H, DENG Y H, HE K, et al. Generating natural language adversarial examples through probability weighted word saliency[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, USA: Association for Computational Linguistics, 2019: 1581-1595.
[33] WALLACE E, RODRIGUEZ P, FENG S, et al. Trick me if you can: human-in-the-loop generation of adversarial examples for question answering. Transactions of the Association for Computational Linguistics, 2019, 7: 387-401. doi: 10.1162/tacl_a_00279.
[34] MICHEL P, LI X A, NEUBIG G, et al. On evaluation of adversarial perturbations for sequence-to-sequence models[EB/OL]. [2021-12-20]. https://arxiv.org/abs/1903.06620.
[35] HUANG P S, STANFORTH R, WELBL J, et al. Achieving verified robustness to symbol substitutions via interval bound propagation[EB/OL]. [2021-12-20]. https://arxiv.org/abs/1909.01492.
[36] PENNINGTON J, SOCHER R, MANNING C. GloVe: global vectors for word representation[C]//Proceedings of 2014 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, USA: Association for Computational Linguistics, 2014: 1532-1543.
[37]
[38] JIN D, JIN Z J, ZHOU J T, et al. Is BERT really robust? A strong baseline for natural language attack on text classification and entailment[C]//Proceedings of AAAI Conference on Artificial Intelligence. [S.l.]: AAAI Press, 2020: 8018-8025.