[1] SADAK F, SAADAT M, HAJIYAVAND A M. Real-time deep learning-based image recognition for applications in automated positioning and injection of biological cells[J]. Computers in Biology and Medicine, 2020, 125(10): 103976.
[2] GUO G D, ZHANG N. A survey on deep learning based face recognition[J]. Computer Vision and Image Understanding, 2019, 189: 102805.
[3] MOPURI K R, GANESHAN A, BABU R V. Generalizable data-free objective for crafting universal adversarial perturbations[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41(10): 2452-2465.
[4] CHEN X N, HU J M, ZHANG B J, et al. Black box attack adversarial starting point promotion method based on mobility between models[J]. Computer Engineering, 2021, 47(8): 162-169. (in Chinese)
[5] PAPERNOT N, MCDANIEL P, GOODFELLOW I, et al. Practical black-box attacks against machine learning[C]//Proceedings of ACM on Asia Conference on Computer and Communications Security. New York, USA: ACM Press, 2017: 506-519.
[6] SHARIF M, BHAGAVATULA S, BAUER L, et al. Accessorize to a crime: real and stealthy attacks on state-of-the-art face recognition[C]//Proceedings of ACM SIGSAC Conference on Computer and Communications Security. New York, USA: ACM Press, 2016: 1528-1540.
[7] EYKHOLT K, EVTIMOV I, FERNANDES E, et al. Robust physical-world attacks on deep learning models[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2018: 1-10.
[8] JIANG Y, ZHANG L G. Survey of adversarial attacks and defense methods for deep learning model[J]. Computer Engineering, 2021, 47(1): 1-11. (in Chinese)
[9] LIU Y P, CHEN X Y, LIU C, et al. Delving into transferable adversarial examples and black-box attacks[EB/OL]. [2021-09-20]. https://arxiv.org/abs/1611.02770.
[10] DONG Y P, LIAO F Z, PANG T Y, et al. Boosting adversarial attacks with momentum[EB/OL]. [2021-09-20]. https://arxiv.org/pdf/1710.06081v2.pdf.
[11] WANG X S, HE X R, WANG J D, et al. Admix: enhancing the transferability of adversarial attacks[EB/OL]. [2021-09-20]. http://arxiv.org/abs/2102.00436v3.
[12] KURAKIN A, GOODFELLOW I, BENGIO S. Adversarial examples in the physical world[EB/OL]. [2021-09-20]. https://arxiv.org/abs/1607.02533v4.
[13] BIGGIO B, CORONA I, MAIORCA D, et al. Evasion attacks against machine learning at test time[C]//Proceedings of European Conference on Machine Learning and Knowledge Discovery in Databases. Berlin, Germany: Springer, 2013: 387-402.
[14] SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[C]//Proceedings of International Conference on Learning Representations. Banff, Canada: [s.n.], 2014: 1-10.
[15] GOODFELLOW I J, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[EB/OL]. [2021-09-20]. https://arxiv.org/pdf/1412.6572.pdf.
[16] EYKHOLT K, EVTIMOV I, FERNANDES E, et al. Robust physical-world attacks on deep learning visual classification[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2018: 1625-1634.
[17] DONG Y P, PANG T Y, SU H, et al. Evading defenses to transferable adversarial examples by translation-invariant attacks[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2019: 4312-4321.
[18] KURAKIN A, GOODFELLOW I J, BENGIO S. Adversarial machine learning at scale[EB/OL]. [2021-09-20]. https://arxiv.org/pdf/1611.01236.pdf.
[19] TRAMER F, KURAKIN A, PAPERNOT N, et al. Ensemble adversarial training: attacks and defenses[EB/OL]. [2021-09-20]. https://arxiv.org/abs/1705.07204v5.
[20] LIU G X, KHALIL I, KHREISHAH A. Using single-step adversarial training to defend iterative adversarial examples[C]//Proceedings of the 7th ACM Conference on Data and Application Security and Privacy. New York, USA: ACM Press, 2021: 17-27.
[21] XIE C H, WU Y X, MAATEN L V D, et al. Feature denoising for improving adversarial robustness[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2019: 1-10.
[22] METZEN J H, GENEWEIN T, FISCHER V, et al. On detecting adversarial perturbations[EB/OL]. [2021-09-20]. https://arxiv.org/abs/1702.04267v1.
[23] RUSSAKOVSKY O, DENG J, SU H, et al. ImageNet large scale visual recognition challenge[J]. International Journal of Computer Vision, 2015, 115(3): 211-252.
[24] SZEGEDY C, VANHOUCKE V, IOFFE S, et al. Rethinking the inception architecture for computer vision[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2016: 2818-2826.
[25] SZEGEDY C, IOFFE S, VANHOUCKE V, et al. Inception-v4, Inception-ResNet and the impact of residual connections on learning[C]//Proceedings of the 31st AAAI Conference on Artificial Intelligence. San Francisco, USA: AAAI Press, 2017: 4278-4284.
[26] HE K M, ZHANG X Y, REN S Q, et al. Identity mappings in deep residual networks[C]//Proceedings of European Conference on Computer Vision. Berlin, Germany: Springer, 2016: 630-645.