[1] ZHANG Haoyu, ZHANG Pengfei, LI Zhenzhen, et al. Self-attention based machine reading comprehension[J]. Journal of Chinese Information Processing, 2018, 32(12): 125-131. (in Chinese)
张浩宇, 张鹏飞, 李真真, 等. 基于自注意力机制的阅读理解模型[J]. 中文信息学报, 2018, 32(12): 125-131.
[2] HERMANN K M, KOCISKY T, GREFENSTETTE E, et al. Teaching machines to read and comprehend[C]//Proceedings of the Annual Conference on Neural Information Processing Systems. Montreal, Canada: MIT Press, 2015: 1693-1701.
[3] XU Lili, LI Ru, LI Yuexiang, et al. Answer selection method of sentence filling for machine reading comprehension[J]. Computer Engineering, 2018, 44(7): 183-187, 192. (in Chinese)
徐丽丽, 李茹, 李月香, 等. 面向机器阅读理解的语句填补答案选择方法[J]. 计算机工程, 2018, 44(7): 183-187, 192.
[4] LÜ Guoying, GUAN Yong, LI Ru, et al. Solution of text title selection based on neural network[J]. Computer Engineering, 2018, 44(9): 171-176. (in Chinese)
吕国英, 关勇, 李茹, 等. 基于神经网络的篇章标题选择求解[J]. 计算机工程, 2018, 44(9): 171-176.
[5] YIN Yichun, ZHANG Ming. A neural machine reading comprehension model based on relabeling and rich features[J]. Journal of Chinese Information Processing, 2018, 32(11): 112-116. (in Chinese)
尹伊淳, 张铭. 一种基于数据重构和富特征的神经网络机器阅读理解模型[J]. 中文信息学报, 2018, 32(11): 112-116.
[6] HILL F, BORDES A, CHOPRA S, et al. The goldilocks principle: reading children's books with explicit memory representations[C]//Proceedings of the International Conference on Learning Representations. San Juan, Puerto Rico: [s.n.], 2016: 228-236.
[7] KADLEC R, SCHMID M, BAJGAR O, et al. Text understanding with the attention sum reader network[C]//Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Berlin, Germany: Association for Computational Linguistics, 2016: 1-11.
[8] DHINGRA B, LIU H X, YANG Z L, et al. Gated-attention readers for text comprehension[C]//Proceedings of the Annual Meeting of the Association for Computational Linguistics. Vancouver, Canada: Association for Computational Linguistics, 2017: 1832-1846.
[9] SORDONI A, BACHMAN P, BENGIO Y. Iterative alternating neural attention for machine reading[EB/OL]. [2019-06-20]. https://www.researchgate.net/profile/Philip_Bachman/publication/303840417_Iterative_Alternating_Neural_Attention_for_Machine_Reading/links/59921e9daca27289539bb860/Iterative-Alternating-Neural-Attention-for-Machine-Reading.pdf.
[10] CHEN D Q, BOLTON J, MANNING C D. A thorough examination of the CNN/Daily Mail reading comprehension task[C]//Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Berlin, Germany: Association for Computational Linguistics, 2016: 2359-2367.
[11] RAJPURKAR P, ZHANG J, LOPYREV K, et al. SQuAD: 100,000+ questions for machine comprehension of text[C]//Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Austin, USA: Association for Computational Linguistics, 2016: 2383-2392.
[12] TRISCHLER A, WANG T, YUAN X D, et al. NewsQA: a machine comprehension dataset[C]//Proceedings of the 2nd Workshop on Representation Learning for NLP. Vancouver, Canada: Association for Computational Linguistics, 2017: 191-200.
[13] SEO M J, KEMBHAVI A, FARHADI A, et al. Bidirectional attention flow for machine comprehension[EB/OL]. [2019-06-20]. https://arxiv.org/pdf/1611.01603.pdf.
[14] Natural Language Computing Group, Microsoft Research Asia. R-NET: machine reading comprehension with self-matching networks[C]//Proceedings of the Annual Meeting of the Association for Computational Linguistics. New York, USA: Association for Computational Linguistics, 2017: 1-11.
[15] YU A W, DOHAN D, LUONG M T, et al. QANet: combining local convolution with global self-attention for reading comprehension[EB/OL]. [2019-06-20]. https://arxiv.org/abs/1804.09541.
[16] ZHU Haichao, WEI Furu, QIN Bing, et al. Hierarchical attention flow for multiple-choice reading comprehension[C]//Proceedings of the AAAI Conference on Artificial Intelligence. New Orleans, USA: AAAI Press, 2018: 6077-6085.
[17] WANG Shuohang, YU Mo, CHANG Shiyu, et al. A co-matching model for multi-choice reading comprehension[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Melbourne, Australia: Association for Computational Linguistics, 2018: 746-751.
[18] BAI S J, KOLTER J Z, KOLTUN V. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling[EB/OL]. [2019-06-20]. https://arxiv.org/abs/1803.01271.
[19] LAI Guokun, XIE Qizhe, LIU Hanxiao, et al. RACE: large-scale ReAding comprehension dataset from examinations[C]//Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Copenhagen, Denmark: Association for Computational Linguistics, 2017: 785-794.
[20] HOCHREITER S, SCHMIDHUBER J. Long short-term memory[J]. Neural Computation, 1997, 9(8): 1735-1780.
[21] CHEN Q, ZHU X, LING Z H, et al. Natural language inference with external knowledge[EB/OL]. [2019-06-20]. https://arxiv.org/pdf/1711.04289.pdf.
[22] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2016: 770-778.
[23] SALIMANS T, KINGMA D P. Weight normalization: a simple reparameterization to accelerate training of deep neural networks[C]//Proceedings of the Annual Conference on Neural Information Processing Systems. Barcelona, Spain: MIT Press, 2016: 901-906.
[24] SRIVASTAVA N, HINTON G, KRIZHEVSKY A, et al. Dropout: a simple way to prevent neural networks from overfitting[J]. Journal of Machine Learning Research, 2014, 15(1): 1929-1958.
[25] PARIKH S, SAI A, NEMA P, et al. ElimiNet: a model for eliminating options for reading comprehension with multiple choice questions[C]//Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence. California, USA: International Joint Conferences on Artificial Intelligence Organization, 2018: 4272-4278.
[26] TAY Y, TUAN L A, HUI S C. Multi-range reasoning for machine comprehension[EB/OL]. [2019-06-20]. https://arxiv.org/abs/1803.09074.