[1] Akhtar N, Mian A. Threat of adversarial attacks on deep
learning in computer vision: A survey [J]. IEEE Access,
2018, 6: 14410-14430.
[2] 何英哲, 胡兴波, 何锦雯, 孟国柱, 陈恺. 机器学习系统
的隐私和安全问题综述 [J]. 计算机研究与发展, 2019,
56(10): 2049-2070.
He Yingzhe, Hu Xingbo, He Jinwen, Meng Guozhu, Chen
Kai. Privacy and Security Issues in Machine Learning
Systems: A Survey [J]. Journal of Computer Research and
Development, 2019, 56(10): 2049-2070.
[3] Goodfellow I J, Shlens J, Szegedy C. Explaining and
harnessing adversarial examples [J]. arXiv preprint
arXiv:1412.6572, 2014.
[4] Su Jiawei, Vargas D V, Sakurai K. One pixel attack for
fooling deep neural networks [J]. IEEE Transactions on
Evolutionary Computation, 2019, 23(5): 828-841.
[5] Moosavi-Dezfooli S M, Fawzi A, Fawzi O, et al. Universal
adversarial perturbations [C] // Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition,
2017: 1765-1773.
[6] Szegedy C, Zaremba W, Sutskever I, et al. Intriguing
properties of neural networks [J]. arXiv preprint
arXiv:1312.6199, 2013.
[7] Xiao Chaowei, Li Bo, Zhu Junyan, He Warren, Liu
Mingyan, Song Dawn. Generating adversarial examples
with adversarial networks [J]. arXiv preprint
arXiv:1801.02610, 2018.
[8] Xu Han, Ma Yao, Liu Haochen, et al. Adversarial attacks and
defenses in images, graphs and text: A review [J].
International Journal of Automation and Computing, 2020,
17(2): 151-178.
[9] Dziugaite G K, Ghahramani Z, Roy D M. A study of the
effect of JPG compression on adversarial images [J]. arXiv
preprint arXiv:1608.00853, 2016.
[10] Luo Yan, Boix X, Roig G, et al. Foveation-based
mechanisms alleviate adversarial examples [J]. arXiv
preprint arXiv:1511.06292, 2015.
[11] Ross A S, Doshi-Velez F. Improving the adversarial
robustness and interpretability of deep neural networks by
regularizing their input gradients [J]. arXiv preprint
arXiv:1711.09404, 2017.
[12] Li Xiang, Ji Shihao. Defense-VAE: A fast and accurate
defense against adversarial attacks [C] // Proceedings of
the Joint European Conference on Machine Learning and
Knowledge Discovery in Databases. Springer, Cham, 2019:
191-207.
[13] Dubey A, Maaten L, Yalniz Z, et al. Defense against
adversarial images using web-scale nearest-neighbor
search[C] // Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition, 2019:
8767-8776.
[14] Liu Changrui, Ye Dengpan, Shang Yueyun, et al. Defend
Against Adversarial Samples by Using Perceptual Hash [J].
CMC-Computers, Materials & Continua, 2020, 62(3):
1365-1386.
[15] Sun Bo, Tsai N, Liu Fangchen, et al. Adversarial defense by
stratified convolutional sparse coding [C] // Proceedings of
the IEEE Conference on Computer Vision and Pattern
Recognition, 2019: 11447-11456.
[16] 吴立人, 刘政浩, 张浩, 岑悦亮, 周维 [J]. 计算机应
用, 2020, 40(5): 1348-1353.
Wu Liren, Liu Zhenghao, Zhang Hao, Cen Yueliang, Zhou
Wei [J]. Journal of Computer Applications, 2020,
40(5): 1348-1353.
[17] Dong Yinpeng, Liao Fangzhou, Pang Tianyu, Su Hang,
Zhu Jun, Hu Xiaolin, Li Jianguo. Boosting Adversarial
Attacks With Momentum [C] // Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition,
2018: 9185-9193.
[18] Yan Xiaodan, Cui Baojiang, Xu Yang, et al. A method of
information protection for collaborative deep learning
under gan model attack [J]. IEEE/ACM Transactions on
Computational Biology and Bioinformatics, 2019:1-12.
[19] Deng Li. The MNIST database of handwritten digit images
for machine learning research [J]. IEEE Signal Processing
Magazine, 2012, 29(6): 141-142.
[20] Stallkamp J, Schlipsing M, Salmen J, et al. The German
traffic sign recognition benchmark: a multi-class
classification competition [C] // Proceedings of the IEEE
International Joint Conference on Neural Networks, 2011:
1453-1460.
[21] Simonyan K, Zisserman A. Very deep convolutional
networks for large-scale image recognition [J]. arXiv
preprint arXiv:1409.1556, 2014.