[1] HERMANN K M, KOISKY' T, GREFENSTETTE E, et al. Teaching machines to read and comprehend[EB/OL].[2023-07-07]. https://arxiv.org/abs/1506.03340. [2] SEO M, KEMBHAVI A, FARHADI A, et al. Bidirectional attention flow for machine comprehension[EB/OL].[2023-07-07]. http://arxiv.org/abs/1611.01603. [3] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Proceedings of NIPS'17. Cambridge, USA:MIT Press, 2017:5998-6008. [4] DEVLIN J, CHANG M W, LEE K, et al. BERT:pre-training of deep bidirectional Transformers for language understanding[EB/OL].[2023-07-07]. http://arxiv.org/abs/1810.04805. [5] LIU Y, OTT M, GOYAL N, et al. RoBERTa:a robustly optimized BERT pretraining approach[EB/OL].[2023-07-07]. http://arxiv.org/abs/1907.11692. [6] CUI Y, CHE W, LIU T, et al. Revisiting pre-trained models for Chinese natural language processing[EB/OL].[2023-07-07]. http://arxiv.org/abs/2004.13922. [7] YANG Z, DAI Z, YANG Y, et al. XLNet:generalized autoregressive pretraining for language understanding[EB/OL].[2023-07-07]. http://arxiv.org/abs/1906.08237. [8] LAN Z, CHEN M, GOODMAN S, et al. ALBERT:a lite BERT for self-supervised learning of language representations[EB/OL].[2023-07-07]. http://arxiv.org/abs/1909.11942. [9] JOSHI M, CHEN D Q, LIU Y H, et al. SpanBERT:improving pre-training by representing and predicting spans[J]. Transactions of the Association for Computational Linguistics, 2020, 8:64-77. [10] RAJPURKAR P, ZHANG J, LOPYREV K, et al. SQuAD:100, 000+ questions for machine comprehension of text[C]//Proceedings of 2016 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, USA:Association for Computational Linguistics, 2016:2383-2392. [11] TRISCHLER A, WANG T, YUAN X, et al. NewsQA:a machine comprehension dataset[EB/OL].[2023-07-07]. http://arxiv.org/abs/1611.09830. [12] JOSHI M, CHOI E, WELD D S, et al. TriviaQA:a large scale distantly supervised challenge dataset for reading comprehension[EB/OL].[2023-07-07]. http://arxiv.org/abs/1705.03551. [13] LAI G, XIE Q, LIU H, et al. RACE:large-scale reading comprehension dataset from examinations[EB/OL].[2023-07-07]. http://arxiv.org/abs/1704.04683. [14] RAJPURKAR P, JIA R, LIANG P. Know what you don't know:unanswerable questions for SQuAD[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2:Short Papers). Stroudsburg, USA:Association for Computational Linguistics, 2018:784-789. [15] CUI Y, LIU T, CHEN Z, et al. Dataset for the first evaluation on Chinese machine reading comprehension[EB/OL].[2023-07-07]. http://arxiv.org/abs/1709.08299. [16] CUI Y M, LIU T, CHE W X, et al. A span-extraction dataset for Chinese machine reading comprehension[C]//Proceedings of 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing(EMNLP-IJCNLP). Stroudsburg, USA:Association for Computational Linguistics, 2019:5883-5888. [17] SHAO C C, LIU T, LAI Y, et al. DRCD:a Chinese machine reading comprehension dataset[EB/OL].[2023-07-07]. http://arxiv.org/abs/1806.00920. [18] SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[EB/OL].[2023-07-07]. http://arxiv.org/abs/1312.6199. [19] GOODFELLOW I J, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[EB/OL].[2023-07-07]. http://arxiv.org/abs/1412.6572. [20] MIYATO T, DAI A M, GOODFELLOW I. Adversarial training methods for semi-supervised text classification[EB/OL].[2023-07-07]. http://arxiv.org/abs/1605.07725. [21] MADRY A, MAKELOV A, SCHMIDT L, et al. Towards deep learning models resistant to adversarial attacks[EB/OL].[2023-07-07]. https://arxiv.org/abs/1706.06083. [22] ZHU C, CHENG Y, GAN Z, et al. FreeLB:enhanced adversarial training for natural language understanding[EB/OL].[2023-07-07]. https://arxiv.org/abs/1909.11764. [23] 刘高军, 李亚欣, 段建勇. 基于混合注意力机制的中文机器阅读理解[J]. 计算机工程, 2022, 48(10):67-72, 80. LIU G J, LI Y X, DUAN J Y. Chinese machine reading comprehension based on hybrid attention mechanism[J]. Computer Engineering, 2022, 48(10):67-72, 80.(in Chinese) [24] SUN Z J, LI X Y, SUN X F, et al. ChineseBERT:Chinese pretraining enhanced by glyph and Pinyin information[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1:Long Papers). Stroudsburg, USA:Association for Computational Linguistics, 2021:2065-2075. [25] 韩玉蛟,罗智勇,张明明,等.基于话头话体共享结构信息的机器阅读理解研究[C]//第二十一届中国计算语言学大会论文集.[出版地不详]:中国中文信息学会计算语言专业委员会, 2022:634-643. HAN Y J, LUO Z Y, ZHANG M M, et al. Research on machine reading comprehension based on shared structure information between naming and telling[C]//Proceedings of the 21st Chinese National Conference on Computational Linguistics.[S. l.]:Computational Language Professional Committee of the Chinese Information Society, 2022:634-643.(in Chinese) [26] SUN Y, WANG S H, LI Y K, et al. ERNIE2.0:a continual pre-training framework for language understanding[EB/OL].[2023-07-07]. https://arxiv.org/abs/1907.12412. [27] WANG J W, ZHAO H, ZHAO Y G, et al. What if sentence-hood is hard to define:a case study in Chinese reading comprehension[C]//Proceedings of the Findings of the Association for Computational Linguistics:EMNLP 2021. Stroudsburg, USA:Association for Computational Linguistics, 2021:2348-2359. [28] CHEN B, TANG H, WANG J, et al. CLOWER:a pre-trained language model with contrastive learning over word and character representations[EB/OL].[2023-07-07]. http://arxiv.org/abs/2208.10844. [29] CHEN D Q, FISCH A, WESTON J, et al. Reading Wikipedia to answer open-domain questions[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1:Long Papers). Stroudsburg, USA:Association for Computational Linguistics, 2017:1870-1879. |