[1]
[2] SAMANTA S, MEHTA S. Generating adversarial text samples[M]. Berlin, Germany: Springer, 2018.
[3]
[4] GAO J, LANCHANTIN J, SOFFA M L, et al. Black-box generation of adversarial text sequences to evade deep learning classifiers[C]//Proceedings of IEEE Security and Privacy Workshops. Washington D. C., USA: IEEE Press, 2018: 50-56.
[5] YUAN L P, ZHENG X Q, ZHOU Y, et al. On the transferability of adversarial attacks against neural text classifier[EB/OL]. [2022-11-03]. https://arxiv.org/abs/2011.08558.
[6]
[7] EGER S, ŞAHIN G G, RÜCKLÉ A, et al. Text processing like humans do: visually attacking and shielding NLP systems[EB/OL]. [2022-11-03]. https://arxiv.org/abs/1903.11508.
[8] PAPERNOT N, MCDANIEL P, SWAMI A, et al. Crafting adversarial input sequences for recurrent neural networks[C]//Proceedings of the 2016 IEEE Military Communications Conference. Washington D. C., USA: IEEE Press, 2016: 49-54.
[9]
[10] REN S H, DENG Y H, HE K, et al. Generating natural language adversarial examples through probability weighted word saliency[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, USA: Association for Computational Linguistics, 2019: 1085-1097.
[11] JIN D, JIN Z J, ZHOU J T, et al. Is BERT really robust? A strong baseline for natural language attack on text classification and entailment[C]//Proceedings of the AAAI Conference on Artificial Intelligence. Palo Alto, USA: AAAI Press, 2020: 8018-8025.
[12] GARG S, RAMAKRISHNAN G. BAE: BERT-based adversarial examples for text classification[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, USA: Association for Computational Linguistics, 2020: 6174-6181.
[13]
[14]
[15] IYYER M, WIETING J, GIMPEL K, et al. Adversarial example generation with syntactically controlled paraphrase networks[EB/OL]. [2022-11-03]. https://arxiv.org/abs/1804.06059.
[16] RIBEIRO M T, SINGH S, GUESTRIN C. Semantically equivalent adversarial rules for debugging NLP models[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, USA: Association for Computational Linguistics, 2018: 1-10.
[17]
[18]
[19] VIJAYARAGHAVAN P, ROY D. Generating black-box adversarial examples for text classifiers using a deep reinforced model[M]. Berlin, Germany: Springer, 2020.
[20]
[21] LI L Y, SHAO Y F, SONG D M, et al. Generating adversarial examples in Chinese texts using sentence-pieces[EB/OL]. [2022-11-03]. https://arxiv.org/abs/2012.14769.
[22] 王文琦, 汪润, 王丽娜, 等. 面向中文文本倾向性分类的对抗样本生成方法[J]. 软件学报, 2019, 30(8): 2415-2427.
WANG W Q, WANG R, WANG L N, et al. Adversarial examples generation approach for tendency classification on Chinese texts[J]. Journal of Software, 2019, 30(8): 2415-2427.
[23] WANG X A, LIU Q, GUI T, et al. TextFlint: unified multilingual robustness evaluation toolkit for natural language processing[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations. Stroudsburg, USA: Association for Computational Linguistics, 2021: 347-355.
[24] 叶静. 汉字的顺序不一定影响阅读[J]. 重庆文理学院学报(社会科学版), 2014, 33(6): 77-81.
YE J. The influence of Chinese character order on reading[J]. Journal of Chongqing University of Arts and Sciences (Social Sciences Edition), 2014, 33(6): 77-81.
[25] SNELL J, SWERSKY K, ZEMEL R. Prototypical networks for few-shot learning[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. New York, USA: ACM Press, 2017: 4080-4090.
[26]