[1] CHEN D Q.Neural reading comprehension and beyond[D].Palo Alto, USA:Stanford University, 2018.
[2] LIU S S, ZHANG X, ZHANG S, et al.Neural machine reading comprehension:methods and trends[EB/OL].(2019-07-02)[2021-02-10].https://arxiv.org/pdf/1907.01118v1.pdf.
[3] MIN S, ZHONG V, ZETTLEMOYER L, et al.Multi-hop reading comprehension through question decomposition and rescoring[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.[S.l.]:Association for Computational Linguistics, 2019:6097-6109.
[4] TALMOR A, BERANT J.The Web as a knowledge-base for answering complex questions[C]//Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies.[S.l.]:Association for Computational Linguistics, 2018:641-651.
[5] JIANG Y, BANSAL M.Self-assembling modular networks for interpretable multi-hop reasoning[C]//Proceedings of EMNLP-IJCNLP 2019.[S.l.]:Association for Computational Linguistics, 2019:4473-4483.
[6] PEREZ E, LEWIS P, CHO K, et al.Unsupervised question decomposition for question answering[C]//Proceedings of 2020 Conference on Empirical Methods in Natural Language Processing.[S.l.]:Association for Computational Linguistics, 2020:8864-8880.
[7] SONG L, WANG Z, YU M, et al.Exploring graph-structured passage representation for multi-hop reading comprehension with graph neural networks[EB/OL].(2018-09-06)[2021-02-10].https://arxiv.org/pdf/1809.02040.pdf.
[8] CAO N, AZIZ W, TITOV I.Question answering by reasoning across documents with graph convolutional networks[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies.[S.l.]:Association for Computational Linguistics, 2019:2306-2317.
[9] TU M, WANG G, HUANG J, et al.Multi-hop reading comprehension across multiple documents by reasoning over heterogeneous graphs[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.[S.l.]:Association for Computational Linguistics, 2019:2704-2713.
[10] CAO Y, FANG M, TAO D.BAG:bi-directional attention entity graph convolutional network for multi-hop reasoning question answering[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies.[S.l.]:Association for Computational Linguistics, 2019:357-362.
[11] QIU L, XIAO Y, QU Y, et al.Dynamically fused graph network for multi-hop reasoning[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.[S.l.]:Association for Computational Linguistics, 2019:6140-6150.
[12] FANG Y W, SUN S Q, GAN Z, et al.Hierarchical graph network for multi-hop question answering[C]//Proceedings of 2020 Conference on Empirical Methods in Natural Language Processing.[S.l.]:Association for Computational Linguistics, 2020:8823-8838.
[13] THAYAPARAN M, VALENTINO M, SCHLEGEL V, et al.Identifying supporting facts for multi-hop question answering with document graph networks[EB/OL].(2019-10-01)[2021-02-10].https://arxiv.org/pdf/1910.00290.pdf.
[14] TU M, HUANG K, WANG G T, et al.Select, answer and explain:interpretable multi-hop reading comprehension over multiple documents[C]//Proceedings of 2020 AAAI Conference on Artificial Intelligence.[S.l.]:AAAI, 2020:9073-9080.
[15] DING M, ZHOU C, CHEN Q, et al.Cognitive graph for multi-hop reading comprehension at scale[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.[S.l.]:Association for Computational Linguistics, 2019:2694-2703.
[16] YE D, LIN Y, LIU Z, et al.Multi-paragraph reasoning with knowledge-enhanced graph neural network[EB/OL].(2019-11-06)[2021-02-10].https://arxiv.org/pdf/1911.02170.pdf.
[17] TANG Z Y, SHEN Y L, MA X Y, et al.Multi-hop reading comprehension across documents with path-based graph convolutional network[C]//Proceedings of IJCAI-PRICAI 2020.Yokohama, Japan:[s.n.], 2020:1-7.
[18] SHAO N, CUI Y M, LIU T, et al.Is graph structure necessary for multi-hop reasoning?[C]//Proceedings of 2020 Conference on Empirical Methods in Natural Language Processing.[S.l.]:Association for Computational Linguistics, 2020:7187-7192.
[19] CHEN D, FISCH A, WESTON J, et al.Reading Wikipedia to answer open-domain questions[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics.[S.l.]:Association for Computational Linguistics, 2017:1870-1879.
[20] FELDMAN Y, EL-YANIV R.Multi-hop paragraph retrieval for open-domain question answering[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.[S.l.]:Association for Computational Linguistics, 2019:2296-2309.
[21] MIN S, WALLACE E, SINGH S, et al.Compositional questions do not necessitate multi-hop reasoning[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.[S.l.]:Association for Computational Linguistics, 2019:4249-4257.
[22] DAS R, DHULIAWALA S, ZAHEER M, et al.Multi-step retriever-reader interaction for scalable open-domain question answering[C]//Proceedings of International Conference on Learning Representations.New Orleans, USA:[s.n.], 2019:1-13.
[23] QI P, LIN X, MEHR L, et al.Answering complex open-domain questions through iterative query generation[C]//Proceedings of EMNLP-IJCNLP 2019.Hong Kong, China:Association for Computational Linguistics, 2019:2590-2602.
[24] XIONG W, YU M, GUO X, et al.Simple yet effective bridge reasoning for open-domain multi-hop question answering[C]//Proceedings of EMNLP-IJCNLP 2019.Hong Kong, China:[s.n.], 2019:1-5.
[25] GODBOLE A, KAVARTHAPU D, DAS R, et al.Multi-step entity-centric information retrieval for multi-hop question answering[EB/OL].(2019-09-17)[2021-02-10].https://arxiv.org/pdf/1909.07598.pdf.
[26] NISHIDA K, NISHIDA K, NAGATA M, et al.Answering while summarizing:multi-task learning for multi-hop QA with evidence extraction[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.[S.l.]:Association for Computational Linguistics, 2019:2335-2345.
[27] YADAV V, BETHARD S, SURDEANU M.Quick and (not so) dirty:unsupervised selection of justification sentences for multi-hop question answering[EB/OL].(2020-05-03)[2021-02-10].https://arxiv.org/pdf/1911.07176v2.pdf.
[28] GRAIL Q, PEREZ J, GAUSSIER E.Latent question reformulation and information accumulation for multi-hop machine reading[EB/OL].[2021-02-10].https://openreview.net/forum?id=S1x63TEYvr.
[29] CHEN J F, LIN S T, DURRETT G.Multi-hop question answering via reasoning chains[EB/OL].[2021-02-10].https://arxiv.org/pdf/1910.02610.pdf.
[30] JIANG Y, JOSHI N, CHEN Y, et al.Explore, propose, and assemble:an interpretable model for multi-hop reading comprehension[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.[S.l.]:Association for Computational Linguistics, 2019:2714-2725.
[31] ASAI A, HASHIMOTO K, HAJISHIRZI H, et al.Learning to retrieve reasoning paths over Wikipedia graph for question answering[EB/OL].(2020-02-14)[2021-02-10].https://arxiv.org/pdf/1911.10470.pdf.
[32] YADAV V, BETHARD S, SURDEANU M.Unsupervised alignment-based iterative evidence retrieval for multi-hop question answering[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.[S.l.]:Association for Computational Linguistics, 2020:1-5.
[33] ZHONG V, XIONG C, KESKAR N S, et al.Coarse-grain fine-grain coattention network for multi-evidence question answering[EB/OL].(2019-05-13)[2021-02-10].https://arxiv.org/pdf/1901.00603.pdf.
[34] ZHUANG Y, WANG H.Token-level dynamic self-attention network for multi-passage reading comprehension[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.[S.l.]:Association for Computational Linguistics, 2019:1-5.
[35] ZHAO C, XIONG C, ROSSET C, et al.Transformer-XH:multi-evidence reasoning with eXtra hop attention[C]//Proceedings of International Conference on Learning Representations.Addis Ababa, Ethiopia:[s.n.], 2020:1-5.
[36] BELTAGY I, PETERS M E, COHAN A.Longformer:the long-document transformer[EB/OL].(2020-12-02)[2021-02-10].https://arxiv.org/pdf/2004.05150v2.pdf.
[37] ZAHEER M, GURUGANESH G, DUBEY A, et al.Big Bird:transformers for longer sequences[EB/OL].(2021-01-08)[2021-02-10].https://arxiv.org/pdf/2007.14062.pdf.
[38] BAUER L, WANG Y, BANSAL M.Commonsense for generative multi-hop question answering tasks[EB/OL].(2019-06-01)[2021-02-10].https://arxiv.org/pdf/1809.06309.pdf.
[39] KOCISKÝ T, SCHWARZ J, BLUNSOM P, et al.The NarrativeQA reading comprehension challenge[EB/OL].(2017-12-19)[2021-02-10].https://arxiv.org/pdf/1712.07040.pdf.
[40] TRIVEDI H, KWON H, KHOT T, et al.Repurposing entailment for multi-hop question answering tasks[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies.[S.l.]:Association for Computational Linguistics, 2019:2948-2958.
[41] WELBL J, STENETORP P, RIEDEL S.Constructing datasets for multi-hop reading comprehension across documents[J].Transactions of the Association for Computational Linguistics, 2018, 6:287-302.
[42] JOSHI M, CHOI E, WELD D S, et al.TriviaQA:a large scale distantly supervised challenge dataset for reading comprehension[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics.[S.l.]:Association for Computational Linguistics, 2017:1601-1611.
[43] YANG Z, QI P, ZHANG S, et al.HotpotQA:a dataset for diverse, explainable multi-hop question answering[C]//Proceedings of 2018 Conference on Empirical Methods in Natural Language Processing.[S.l.]:Association for Computational Linguistics, 2018:2369-2380.
[44] JOSHI M, CHEN D, LIU Y, et al.SpanBERT:improving pre-training by representing and predicting spans[J].Transactions of the Association for Computational Linguistics, 2020, 8:64-77.
[45] BACK S, YU S, INDURTHI S R, et al.MemoReader:large-scale reading comprehension through neural memory controller[C]//Proceedings of 2018 Conference on Empirical Methods in Natural Language Processing.[S.l.]:Association for Computational Linguistics, 2018:2131-2140.
[46] CLARK C, GARDNER M.Simple and effective multi-paragraph reading comprehension[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics.[S.l.]:Association for Computational Linguistics, 2018:845-855.
[47] WEISSENBORN D, KOČISKÝ T, DYER C.Dynamic integration of background knowledge in neural NLU systems[EB/OL].(2018-08-21)[2021-02-10].https://arxiv.org/pdf/1706.02596.pdf.
[48] HU M, PENG Y, HUANG Z, et al.Reinforced mnemonic reader for machine reading comprehension[EB/OL].(2018-06-06)[2021-02-10].https://arxiv.org/pdf/1705.02798v6.pdf.
[49] PAN B, LI H, ZHAO Z, et al.MEMEN:multi-layer embedding with memory networks for machine comprehension[EB/OL].(2017-07-28)[2021-02-10].https://arxiv.org/pdf/1707.09098.pdf.
[50] JIANG Y, BANSAL M.Avoiding reasoning shortcuts:adversarial evaluation, training, and model development for multi-hop QA[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.[S.l.]:Association for Computational Linguistics, 2019:2726-2736.