[1] JOSHI M,CHOI E,WELD D S,et al.TriviaQA:a large-scale distantly supervised challenge dataset for reading comprehension[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics.Vancouver,Canada:Association for Computational Linguistics,2017:17-47.
[2] HE Wei,LIU Kai,LIU Jing,et al.DuReader:a Chinese machine reading comprehension dataset from real-world applications[EB/OL].[2019-08-10].https://www.researchgate.net/publication.
[3] YANG Zhilin,QI Peng,ZHANG Saizheng,et al.HotpotQA:a dataset for diverse,explainable multi-hop question answering[C]//Proceedings of 2018 Conference on Empirical Methods in Natural Language Processing.Brussels,Belgium:[s.n.],2018:1-10.
[4] DAI Z,XIONG C,CALLAN J,et al.Convolutional neural networks for soft-matching n-grams in ad-hoc search[C]//Proceedings of the 11th ACM International Conference on Web Search and Data Mining.New York,USA:ACM Press,2018:126-134.
[5] BLANCO R,OTTAVIANO G,MEIJ E.Fast and space-efficient entity linking for queries[C]//Proceedings of the 8th ACM International Conference on Web Search and Data Mining.New York,USA:ACM Press,2015:179-188.
[6] SANH V,WOLF T,RUDER S.A hierarchical multi-task approach for learning embeddings from semantic tasks[C]//Proceedings of AAAI Conference on Artificial Intelligence.[S.l.]:AAAI Press,2018:125-136.
[7] CHUNG J,GULCEHRE C,CHO K H,et al.Empirical evaluation of gated recurrent neural networks on sequence modeling[EB/OL].[2019-08-10].https://www.researchgate.net/publication.
[8] MIKOLOV T,KARAFIAT M,BURGET L,et al.Recurrent neural network based language model[C]//Proceedings of the 11th Annual Conference of the International Speech Communication Association.Washington D.C.,USA:IEEE Press,2010:365-378.
[9] BAHDANAU D,CHO K,BENGIO Y.Neural machine translation by jointly learning to align and translate[EB/OL].[2019-08-10].https://arxiv.org/abs/1409.0473.
[10] VINYALS O,FORTUNATO M,JAITLY N.Pointer networks[C]//Proceedings of NIPS'15.Washington D.C.,USA:IEEE Press,2015:2692-2700.
[11] DEVLIN J,CHANG M W,LEE K,et al.BERT:pre-training of deep bidirectional transformers for language understanding[EB/OL].[2019-08-10].https://arxiv.org/abs/1810.04805.
[12] RAJPURKAR P,ZHANG J,LOPYREV K,et al.SQuAD:100000+ questions for machine comprehension of text[C]//Proceedings of 2016 Conference on Empirical Methods in Natural Language Processing.Austin,USA:Association for Computational Linguistics,2016:158-169.
[13] WANG Shuohang,JIANG Jing.Machine comprehension using match-LSTM and answer pointer[EB/OL].[2019-08-10].https://www.researchgate.net/publication/307302995.
[14] SEO M,KEMBHAVI A,FARHADI A,et al.Bidirectional attention flow for machine comprehension[EB/OL].[2019-08-10].https://www.researchgate.net/publication/309738677.
[15] HU Minghao,PENG Yuxing,HUANG Zhen,et al.Reinforced mnemonic reader for machine reading comprehension[EB/OL].[2019-08-10].https://www.researchgate.net/publication/316780375.
[16] TRISCHLER A,YE Z,YUAN X,et al.Natural language comprehension with the EpiReader[C]//Proceedings of 2016 Conference on Empirical Methods in Natural Language Processing.Austin,USA:Association for Computational Linguistics,2016:128-137.
[17] CLARK C,GARDNER M.Simple and effective multi-paragraph reading comprehension[EB/OL].[2019-08-10].https://arxiv.org/abs/1710.10723.
[18] WANG Shuohang,YU Mo,GUO Xiaoxiao,et al.Reinforced reader-ranker for open-domain question answering[EB/OL].[2019-08-10].https://arxiv.org/pdf/1709.00023v2.pdf.
[19] WANG Yizhong,LIU Kai,LIU Jing,et al.Multi-passage machine reading comprehension with cross-passage answer verification[EB/OL].[2019-08-10].https://arxiv.org/pdf/1805.02220.pdf.
[20] LIU Jiahua,WEI Wan,CHEN Hao,et al.Machine reading comprehension for multi-document and multi-answer[J].Journal of Chinese Information Processing,2018,32(11):103-111.(in Chinese)刘家骅,韦琬,陈灏,等.基于多篇章多答案的阅读理解系统[J].中文信息学报,2018,32(11):103-111.
[21] WANG Zhiqiang,LI Ru,LIANG Jiye.Research on question answering for reading comprehension based on Chinese discourse frame semantic parsing[J].Chinese Journal of Computers,2016,39(4):155-167.(in Chinese)王智强,李茹,梁吉业.基于汉语篇章框架语义分析的阅读理解问答研究[J].计算机学报,2016,39(4):155-167.
[22] PENNINGTON J,SOCHER R,MANNING C.GloVe:global vectors for word representation[C]//Proceedings of 2014 Conference on Empirical Methods in Natural Language Processing.Doha,Qatar:Association for Computational Linguistics,2014:1532-1543.
[23] KRIZHEVSKY A,SUTSKEVER I,HINTON G E.ImageNet classification with deep convolutional neural networks[C]//Proceedings of NIPS'12.Washington D.C.,USA:IEEE Press,2012:1097-1105.
[24] CHEN D,FISCH A,WESTON J,et al.Reading Wikipedia to answer open-domain questions[EB/OL].[2019-08-10].https://arxiv.org/abs/1704.00051.
[25] HE Kaiming,ZHANG Xiangyu,REN Shaoqing,et al.Deep residual learning for image recognition[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2016:770-778.
[26] KINGMA D P,BA J.Adam:a method for stochastic optimization[EB/OL].[2019-08-10].http://arxiv.org/abs/1412.6980.