[1] MASS Y, CARMELI B, ROITMAN H, et al. Unsupervised FAQ retrieval with question generation and BERT[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. [S.l.]: Association for Computational Linguistics, 2020: 807-812.
[2] HEILMAN M, SMITH N A. Good question! Statistical ranking for question generation[C]//Proceedings of the 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Los Angeles, USA: Association for Computational Linguistics, 2010: 609-617.
[3] TRIVEDI H, BALASUBRAMANIAN N, KHOT T, et al. Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions[C]//Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Toronto, Canada: Association for Computational Linguistics, 2023: 10014-10037.
[4] FEI Z, ZHOU X, GUI T, et al. LFKQG: a controlled generation framework with local fine-tuning for question generation over knowledge bases[C]//Proceedings of the 29th International Conference on Computational Linguistics. Gyeongju, Republic of Korea: International Committee on Computational Linguistics, 2022: 6575-6585.
[5] ELSAHAR H, GRAVIER C, LAFOREST F. Zero-shot question generation from knowledge graphs for unseen predicates and entity types[C]//Proceedings of the 2018 Annual Conference of the North American Chapter of the Association for Computational Linguistics. New Orleans, USA: Association for Computational Linguistics, 2018: 218-228.
[6] 陈跃鹤, 贾永辉, 谈川源, 等. 基于知识图谱全局和局部特征的复杂问答方法[J]. 软件学报, 2023, 34(12): 5614-5628.
CHEN Y H, JIA Y H, TAN C Y, et al. Method for complex question answering based on global and local features of knowledge graph[J]. Journal of Software, 2023, 34(12): 5614-5628. (in Chinese)
[7] BORDES A, USUNIER N, CHOPRA S, et al. Large-scale simple question answering with memory networks[EB/OL]. [2023-05-10]. https://arxiv.org/abs/1506.02075.
[8] BERANT J, CHOU A, FROSTIG R, et al.
Semantic parsing on Freebase from question-answer pairs[C]//Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Seattle, USA: Association for Computational Linguistics, 2013: 1533-1544.
[9] ZHOU M, HUANG M, ZHU X. An interpretable reasoning network for multi-relation question answering[C]//Proceedings of the 27th International Conference on Computational Linguistics. Santa Fe, USA: Association for Computational Linguistics, 2018: 2010-2022.
[10] DHOLE K D, MANNING C D. Syn-QG: syntactic and shallow semantic rules for question generation[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. [S.l.]: Association for Computational Linguistics, 2020: 752-765.
[11] SERBAN I V, GARCÍA-DURÁN A, GULCEHRE C, et al. Generating factoid questions with recurrent neural networks: the 30M factoid question-answer corpus[C]//Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Berlin, Germany: Association for Computational Linguistics, 2016: 588-598.
[12] WANG H, ZHANG X, WANG H. A neural question generation system based on knowledge base[C]//Proceedings of NLPCC 2018. Berlin, Germany: Springer, 2018: 133-142.
[13] 叶子, 陈小平, 张波, 等. 融合预训练模型的中文知识图谱问题生成方法[J]. 小型微型计算机系统, 2021, 42(2): 246-250.
YE Z, CHEN X P, ZHANG B, et al. Method for Chinese knowledge graph question generation based on pre-trained model[J]. Journal of Chinese Computer Systems, 2021, 42(2): 246-250. (in Chinese)
[14] BI S, CHENG X, LI Y F, et al. Knowledge-enriched, type-constrained and grammar-guided question generation over knowledge bases[C]//Proceedings of the 28th International Conference on Computational Linguistics. Barcelona, Spain: International Committee on Computational Linguistics, 2020: 2776-2786.
[15] 胡月, 周光有. 基于Graph Transformer的知识库问题生成[J]. 中文信息学报, 2022, 36(2): 111-120.
HU Y, ZHOU G Y. Question generation from knowledge base with Graph Transformer[J]. Journal of Chinese Information Processing, 2022, 36(2): 111-120. (in Chinese)
[16] HAN K, FERREIRA T C, GARDENT C.
Generating questions from Wikidata triples[C]//Proceedings of the 13th Conference on Language Resources and Evaluation. Marseille, France: European Language Resources Association, 2022: 277-290.
[17] LEWIS M, LIU Y, GOYAL N, et al. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. [S.l.]: Association for Computational Linguistics, 2020: 7871-7880.
[18] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics. Minneapolis, USA: Association for Computational Linguistics, 2019: 4171-4186.
[19] PAPINENI K, ROUKOS S, WARD T, et al. BLEU: a method for automatic evaluation of machine translation[C]//Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. [S.l.]: Association for Computational Linguistics, 2002: 311-318.
[20] DENKOWSKI M, LAVIE A. Meteor Universal: language specific translation evaluation for any target language[C]//Proceedings of the 9th Workshop on Statistical Machine Translation. Baltimore, USA: Association for Computational Linguistics, 2014: 376-380.
[21] LIN C Y. ROUGE: a package for automatic evaluation of summaries[C]//Proceedings of the Workshop on Text Summarization Branches Out, Post-Conference Workshop of ACL. Barcelona, Spain: Association for Computational Linguistics, 2004: 74-81.
[22] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[M]//GUYON I, VON LUXBURG U, BENGIO S, et al. Advances in Neural Information Processing Systems. Long Beach, USA: Curran Associates, Inc., 2017: 5998-6008.
[23] KUMAR V, HUA Y C, RAMAKRISHNAN G, et al. Difficulty-controllable multi-hop question generation from knowledge graphs[C]//Proceedings of the 18th International Semantic Web Conference. Cham: Springer, 2019: 382-398.
[24] CHEN Y, WU L F, ZAKI M J. Toward subgraph-guided knowledge graph question generation with graph neural networks[J]. IEEE Transactions on Neural Networks and Learning Systems, 2024: 1-12.
[25] RAFFEL C, SHAZEER N, ROBERTS A, et al. Exploring the limits of transfer learning with a unified text-to-text transformer[J]. The Journal of Machine Learning Research, 2020, 21(1): 5485-5551.