1. DENG C Y, ZENG G F, CAI Z P, et al. A survey of knowledge-based question answering with deep learning[J]. Journal on Artificial Intelligence, 2020, 2(4): 157-166.
doi: 10.32604/jai.2020.011541
2. HUANG X, KIM J J, ZOU B W. Unseen entity handling in complex question answering over knowledge base via language generation[C]//Findings of the Association for Computational Linguistics: EMNLP 2021. Stroudsburg, USA: Association for Computational Linguistics, 2021: 547-557.
doi: 10.18653/v1/2021.findings-emnlp.50
3. PÉREZ J, ARENAS M, GUTIERREZ C. Semantics and complexity of SPARQL[J]. ACM Transactions on Database Systems, 2009, 34(3): 1-45.
4. YE X, YAVUZ S, HASHIMOTO K, et al. RNG-KBQA: generation augmented iterative ranking for knowledge base question answering[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg, USA: Association for Computational Linguistics, 2022: 6032-6043.
doi: 10.18653/v1/2022.acl-long.417
5. HU X, WU X, SHU Y, et al. Logical form generation via multi-task learning for complex question answering over knowledge bases[C]//Proceedings of the 29th International Conference on Computational Linguistics. Gyeongju, Republic of Korea: [s.n.], 2022: 1687-1696.
6. HE Z P. Knowledge marker-based pre-trained language model with Siamese network for Chinese entity linking[J]. Computer Science and Application, 2022, 12(4): 1202-1212. (in Chinese)
doi: 10.12677/CSA.2022.124122
7. XUE L T, CONSTANT N, ROBERTS A, et al. mT5: a massively multilingual pre-trained text-to-text transformer[EB/OL]. [2023-05-10]. https://arxiv.org/pdf/2010.11934.
8. ROSSIELLO G, MIHINDUKULASOORIYA N, ABDELAZIZ I, et al. Generative relation linking for question answering over knowledge bases[C]//Proceedings of the International Semantic Web Conference. Berlin, Germany: Springer, 2021: 321-337.
doi: 10.48550/arXiv.2108.07337
9. DUAN N. Overview of the NLPCC-ICCPOL 2016 shared task: open domain Chinese question answering[C]//Proceedings of NLPCC 2016 and ICCPOL 2016. Berlin, Germany: Springer, 2016: 942-948.
doi: 10.1007/978-3-319-50496-4_89
10. DONG G T, LI R M, WANG S R, et al. Bridging the KB-text gap: leveraging structured knowledge-aware pre-training for KBQA[C]//Proceedings of the 32nd ACM International Conference on Information and Knowledge Management. New York, USA: ACM Press, 2023: 3854-3859.
11. RAFFEL C, SHAZEER N, ROBERTS A, et al. Exploring the limits of transfer learning with a unified text-to-text transformer[J]. Journal of Machine Learning Research, 2020, 21(1): 5485-5551.
12. LEWIS M, LIU Y H, GOYAL N, et al. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, USA: Association for Computational Linguistics, 2020: 7871-7880.
13. ZHANG L X, ZHANG J, WANG Y L, et al. FC-KBQA: a fine-to-coarse composition framework for knowledge base question answering[EB/OL]. [2023-05-10]. https://arxiv.org/pdf/2306.14722.
14. SEVGILI Ö, SHELMANOV A, ARKHIPOV M, et al. Neural entity linking: a survey of models based on deep learning[J]. Semantic Web, 2022, 13(3): 527-570.
doi: 10.3233/SW-222986
15. CHEN Y, WAN W B, ZHAO Y M, et al. Generalization performance optimization of KBQA system for Chinese open domain[J]. Multimedia Tools and Applications, 2024, 83(5): 12445-12466.
16. ZHANG W T, JIANG S H, ZHAO S, et al. A BERT-BiLSTM-CRF model for Chinese electronic medical records named entity recognition[C]//Proceedings of the 12th International Conference on Intelligent Computation Technology and Automation (ICICTA). Washington D. C., USA: IEEE Press, 2019: 166-169.
doi: 10.1109/ICICTA49267.2019.00043
17. KANNAN RAVI M P, SINGH K, MULANG' I O, et al. CHOLAN: a modular approach for neural entity linking on Wikipedia and Wikidata[C]//Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. Stroudsburg, USA: Association for Computational Linguistics, 2021: 504-514.
doi: 10.48550/arXiv.2101.09969
18.
19. LIU P F, YUAN W Z, FU J L, et al. Pre-train, prompt, and predict: a systematic survey of prompting methods in natural language processing[J]. ACM Computing Surveys, 2023, 55(9): 1-35.
20. SCHICK T, SCHÜTZE H. Exploiting cloze-questions for few-shot text classification and natural language inference[EB/OL]. [2023-05-10]. https://arxiv.org/abs/2001.07676.
21.
22.
23.
24. ZHONG W J, GAO Y F, DING N, et al. ProQA: structural prompt-based pre-training for unified question answering[EB/OL]. [2023-05-10]. https://arxiv.org/abs/2205.04040.
25. LV X, LIN Y K, CAO Y X, et al. Do pre-trained models benefit knowledge graph completion? A reliable evaluation and a reasonable approach[C]//Findings of the Association for Computational Linguistics: ACL 2022. Stroudsburg, USA: Association for Computational Linguistics, 2022: 1-10.
doi: 10.18653/v1/2022.findings-acl.282
26. TAN C Y, CHEN Y H, SHAO W B, et al. Make a choice! Knowledge base question answering with in-context learning[EB/OL]. [2023-05-10]. https://arxiv.org/abs/2305.13972.
27. DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[EB/OL]. [2023-05-10]. https://arxiv.org/abs/1810.04805.
28. TJONG KIM SANG E F, DE MEULDER F. Introduction to the CoNLL-2003 shared task: language-independent named entity recognition[C]//Proceedings of the 7th Conference on Natural Language Learning at HLT-NAACL 2003. Stroudsburg, USA: Association for Computational Linguistics, 2003: 142-147.
doi: 10.48550/arXiv.cs/0306050
29.
30. LIU A T, HUANG Z Q, LU H T, et al. BB-KBQA: BERT-based knowledge base question answering[C]//Proceedings of the China National Conference on Chinese Computational Linguistics. Berlin, Germany: Springer, 2019: 81-92.
doi: 10.1007/978-3-030-32381-3_7
31.
32. LIU W J, ZHOU P, ZHAO Z, et al. K-BERT: enabling language representation with knowledge graph[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34(3): 2901-2908.
doi: 10.1609/aaai.v34i03.5681
33. PAPINENI K, ROUKOS S, WARD T, et al. BLEU: a method for automatic evaluation of machine translation[C]//Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, USA: Association for Computational Linguistics, 2002: 311-318.
34.
35. BANERJEE S, LAVIE A. METEOR: an automatic metric for MT evaluation with improved correlation with human judgments[C]//Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization. Stroudsburg, USA: Association for Computational Linguistics, 2005: 65-72.