[1] CHEN D Q. Neural reading comprehension and beyond[D]. Palo Alto, USA: Stanford University, 2018.
[2] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[EB/OL]. [2021-06-20]. https://arxiv.org/abs/1810.04805.
[3] LAN Z Z, CHEN M D, GOODMAN S, et al. ALBERT: a lite BERT for self-supervised learning of language representations[EB/OL]. [2021-06-20]. https://arxiv.org/abs/1909.11942.
[4] LIU Y H, OTT M, GOYAL N, et al. RoBERTa: a robustly optimized BERT pretraining approach[EB/OL]. [2021-06-20]. https://arxiv.org/abs/1907.11692.
[5] BAHDANAU D, CHO K, BENGIO Y. Neural machine translation by jointly learning to align and translate[EB/OL]. [2021-06-20]. https://arxiv.org/abs/1409.0473.
[6] HERMANN K M, KOČISKÝ T, GREFENSTETTE E, et al. Teaching machines to read and comprehend[EB/OL]. [2021-06-20]. https://arxiv.org/abs/1506.03340.
[7] KADLEC R, SCHMID M, BAJGAR O, et al. Text understanding with the attention sum reader network[EB/OL]. [2021-06-20]. https://arxiv.org/abs/1603.01547.
[8] CHEN D Q, BOLTON J, MANNING C D. A thorough examination of the CNN/Daily Mail reading comprehension task[EB/OL]. [2021-06-20]. https://arxiv.org/abs/1606.02858.
[9] SEO M, KEMBHAVI A, FARHADI A, et al. Bidirectional attention flow for machine comprehension[EB/OL]. [2021-06-20]. https://arxiv.org/abs/1611.01603.
[10] CHEN D Q, FISCH A, WESTON J, et al. Reading Wikipedia to answer open-domain questions[EB/OL]. [2021-06-20]. https://arxiv.org/abs/1704.00051v2.
[11] WANG W H, YANG N, WEI F R, et al. Gated self-matching networks for reading comprehension and question answering[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Washington D.C., USA: IEEE Press, 2017: 189-198.
[12] HUANG H Y, ZHU C G, SHEN Y L, et al. FusionNet: fusing via fully-aware attention with application to machine comprehension[EB/OL]. [2021-06-20]. https://arxiv.org/abs/1711.07341.
[13] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Proceedings of NIPS'17. Cambridge, USA: MIT Press, 2017: 5998-6008.
[14] CUI Y M, LIU T, CHEN Z, et al. Consensus attention-based neural networks for Chinese reading comprehension[EB/OL]. [2021-06-20]. https://arxiv.org/abs/1607.02250.
[15] CUI Y M, LIU T, CHE W X, et al. A span-extraction dataset for Chinese machine reading comprehension[EB/OL]. [2021-06-20]. https://arxiv.org/abs/1810.07366.
[16] CUI Y M, LIU T, YANG Z Q, et al. A sentence cloze dataset for Chinese machine reading comprehension[EB/OL]. [2021-06-20]. https://arxiv.org/abs/2004.03116.
[17] HE W, LIU K, LIU J, et al. DuReader: a Chinese machine reading comprehension dataset from real-world applications[C]//Proceedings of the Workshop on Machine Reading for Question Answering. Washington D.C., USA: IEEE Press, 2018: 37-46.
[18] 徐丽丽, 李茹, 李月香, 等. 面向机器阅读理解的语句填补答案选择方法[J]. 计算机工程, 2018, 44(7): 183-187, 192.
XU L L, LI R, LI Y X, et al. Answer selection method of sentence filling for machine reading comprehension[J]. Computer Engineering, 2018, 44(7): 183-187, 192. (in Chinese)
[19] SHAO C C, LIU T, LAI Y T, et al. DRCD: a Chinese machine reading comprehension dataset[EB/OL]. [2021-06-20]. https://arxiv.org/abs/1806.00920.
[20] CUI Y M, CHE W X, LIU T, et al. Pre-training with whole word masking for Chinese BERT[EB/OL]. [2021-06-20]. https://arxiv.org/abs/1906.08101.
[21] TAY Y, BAHRI D, METZLER D, et al. Synthesizer: rethinking self-attention in Transformer models[EB/OL]. [2021-06-20]. https://arxiv.org/abs/2005.00743.
[22] CUI Y M, CHE W X, LIU T, et al. Revisiting pre-trained models for Chinese natural language processing[EB/OL]. [2021-06-20]. https://arxiv.org/abs/2004.13922.