[1] WANG Z B, WANG X, MA J J, et al. Survey on adversarial example attack for computer vision systems. Chinese Journal of Computers, 2023, 46(2): 436-468. doi: 10.11897/SP.J.1016.2023.00436.
[2] JI S L, DU T Y, LI J F, et al. Security and privacy of machine learning models: a survey. Journal of Software, 2021, 32(1): 41-67. doi: 10.13328/j.cnki.jos.006131.
[3] ATHALYE A, CARLINI N, WAGNER D. Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples[C]//Proceedings of International Conference on Machine Learning. [S.l.]: PMLR, 2018: 274-283.
[4] LI H C, XU X J, ZHANG X L, et al. QEBA: query-efficient boundary-based blackbox attack[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2020: 1221-1230.
[5] ANDRIUSHCHENKO M, CROCE F, FLAMMARION N, et al. Square attack: a query-efficient black-box adversarial attack via random search[C]//Proceedings of European Conference on Computer Vision. Berlin, Germany: Springer, 2020: 484-501.
[6] PAPERNOT N, MCDANIEL P, GOODFELLOW I, et al. Practical black-box attacks against machine learning[C]//Proceedings of the 2017 ACM Asia Conference on Computer and Communications Security. New York, USA: ACM Press, 2017: 506-519.
[7] BAI Z X, WANG H J. Adversarial example generation method based on improved genetic algorithm. Computer Engineering, 2023, 49(5): 139-149. doi: 10.19678/j.issn.1000-3428.0065260.
[8]
[9] MOOSAVI-DEZFOOLI S M, FAWZI A, FROSSARD P. DeepFool: a simple and accurate method to fool deep neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2016: 2574-2582.
[10] CARLINI N, WAGNER D. Towards evaluating the robustness of neural networks[C]//Proceedings of the IEEE Symposium on Security and Privacy. Washington D.C., USA: IEEE Press, 2017: 39-57.
[11]
[12] CROCE F, HEIN M. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks[C]//Proceedings of International Conference on Machine Learning. [S.l.]: PMLR, 2020: 2206-2216.
[13] LI T, WU Y W, CHEN S Z, et al. Subspace adversarial training[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2022: 13409-13418.
[14] JIA X J, ZHANG Y, WU B Y, et al. LAS-AT: adversarial training with learnable attack strategy[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2022: 13398-13408.
[15]
[16]
[17]
[18] KIANI S, AWAN S N, LAN C, et al. Two souls in an adversarial image: towards universal adversarial example detection using multi-view inconsistency[C]//Proceedings of the Annual Computer Security Applications Conference. New York, USA: ACM Press, 2021: 31-44.
[19] SAMANGOUEI P, KABKAB M, CHELLAPPA R. Defense-GAN: protecting classifiers against adversarial attacks using generative models[EB/OL]. [2023-05-20]. https://arxiv.org/abs/1805.06605.
[20] GOODFELLOW I, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]//Proceedings of the 27th International Conference on Neural Information Processing Systems. New York, USA: ACM Press, 2014: 2672-2680.
[21] SONG Y, KIM T, NOWOZIN S, et al. PixelDefend: leveraging generative models to understand and defend against adversarial examples[EB/OL]. [2023-05-20]. https://arxiv.org/abs/1710.10766v3.
[22] SALIMANS T, KARPATHY A, CHEN X, et al. PixelCNN++: improving the PixelCNN with discretized logistic mixture likelihood and other modifications[C]//Proceedings of International Conference on Learning Representations. Berlin, Germany: Springer, 2016: 1-10.
[23]
[24] SHI C, HOLTZ C, MISHNE G. Online adversarial purification based on self-supervised learning[C]//Proceedings of International Conference on Learning Representations. Berlin, Germany: Springer, 2020: 1-10.
[25] YOON J, HWANG S J, LEE J. Adversarial purification with score-based generative models[C]//Proceedings of International Conference on Machine Learning. [S.l.]: PMLR, 2021: 12062-12072.
[26] SINHA A, DASH S P, PUHAN N B. NOMARO: defending against adversarial attacks by NOMA-inspired reconstruction operation. IEEE Sensors Letters, 2022, 6(1): 1-4. doi: 10.1109/LSENS.2021.3135433.
[27] LI J, LI D, XIONG C, et al. BLIP: bootstrapping language-image pre-training for unified vision-language understanding and generation[C]//Proceedings of International Conference on Machine Learning. [S.l.]: PMLR, 2022: 12888-12900.
[28] HO J, JAIN A, ABBEEL P. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 2020, 33: 6840-6851.
[29]
[30] DHARIWAL P, NICHOL A. Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems, 2021, 34: 8780-8794.
[31] SAHARIA C, HO J, CHAN W, et al. Image super-resolution via iterative refinement. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022: 1-14.
[32] SAHARIA C, CHAN W, CHANG H W, et al. Palette: image-to-image diffusion models[C]//Proceedings of the Special Interest Group on Computer Graphics and Interactive Techniques Conference. New York, USA: ACM Press, 2022: 1-10.