[1] BOJARSKI M, DEL TESTA D, DWORAKOWSKI D, et al. End to end learning for self-driving cars[EB/OL]. [2020-06-20]. https://arxiv.org/abs/1604.07316.
[2] PARKHI O M, SIMONYAN K, VEDALDI A, et al. A compact and discriminative face track descriptor[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Columbus, USA: IEEE Computer Society, 2014: 1693-1700.
[3] DONG Y P, SU H, WU B Y, et al. Efficient decision-based black-box adversarial attacks on face recognition[EB/OL]. [2020-06-20]. https://arxiv.org/abs/1904.04433.
[4] SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[EB/OL]. [2019-06-20]. https://arxiv.org/abs/1312.6199.
[5] GOODFELLOW I J, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[EB/OL]. [2020-06-20]. https://arxiv.org/abs/1412.6572.
[6] CHENG M H, SINGH S, CHEN P, et al. Sign-OPT: a query-efficient hard-label adversarial attack[EB/OL]. [2020-06-20]. https://arxiv.org/abs/1909.10773.
[7] CHEN J B, JORDAN M I. Boundary attack++: query-efficient decision-based adversarial attack[EB/OL]. [2020-06-20]. https://arxiv.org/abs/1904.02144.
[8] DONG Y, LIAO F, PANG T, et al. Boosting adversarial attacks with momentum[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE Press, 2018: 9185-9193.
[9] SARKAR S, BANSAL A, MAHBUB U, et al. UPSET and ANGRI: breaking high performance image classifiers[EB/OL]. [2020-06-20]. https://arxiv.org/abs/1707.01159.
[10] AKHTAR N, MIAN A. Threat of adversarial attacks on deep learning in computer vision: a survey[J]. IEEE Access, 2018, 6: 14410-14430.
[11] ZHANG J N, WANG Y X, LIU B, et al. Survey of adversarial attacks of deep learning[J]. Information Security and Technology, 2019, 10(7): 87-96. (in Chinese)
[12] BRENDEL W, RAUBER J. Decision-based adversarial attacks: reliable attacks against black-box machine learning models[EB/OL]. [2020-06-20]. https://arxiv.org/abs/1712.04248.
[13] LIU Y, MOOSAVI-DEZFOOLI S M, FROSSARD P. A geometry-inspired decision-based attack[C]//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. Seoul, South Korea: IEEE Press, 2019: 4889-4897.
[14] PAPERNOT N, MCDANIEL P, GOODFELLOW I, et al. Practical black-box attacks against machine learning[EB/OL]. [2020-06-20]. https://arxiv.org/abs/1602.02697.
[15] TRAMÈR F, PAPERNOT N, GOODFELLOW I, et al. The space of transferable adversarial examples[EB/OL]. [2020-06-20]. https://arxiv.org/abs/1704.03453.
[16] KURAKIN A, GOODFELLOW I, BENGIO S. Adversarial examples in the physical world[EB/OL]. [2020-06-20]. https://arxiv.org/abs/1607.02533.
[17] XIE C, ZHANG Z, ZHOU Y, et al. Improving transferability of adversarial examples with input diversity[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE Press, 2019: 2730-2739.
[18] KURAKIN A, GOODFELLOW I, BENGIO S. Adversarial machine learning at scale[EB/OL]. [2020-06-20]. https://arxiv.org/abs/1611.01236.
[19] ZHENG T, CHEN C, REN K. Distributionally adversarial attack[C]//Proceedings of AAAI Conference on Artificial Intelligence. Palo Alto, USA: AAAI Press, 2019: 2253-2260.
[20] MADRY A, MAKELOV A, SCHMIDT L, et al. Towards deep learning models resistant to adversarial attacks[EB/OL]. [2020-06-20]. https://arxiv.org/abs/1706.06083.