| 1 |
MIN S, ZHONG V, SOCHER R, et al. Efficient and robust question answering from minimal context over documents[EB/OL]. [2024-06-11]. https://arxiv.org/abs/1805.08092.
|
| 2 |
LIN Y K, JI H Z, LIU Z Y, et al. Denoising distantly supervised open-domain question answering[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg, USA: ACL Press, 2018: 1736-1745.
|
| 3 |
|
| 4 |
ZHANG S, LIU X, LIU J, et al. ReCoRD: bridging the gap between human and machine commonsense reading comprehension[EB/OL]. [2024-06-11]. https://arxiv.org/abs/1810.12885.
|
| 5 |
KHASHABI D, CHATURVEDI S, ROTH M, et al. Looking beyond the surface: a challenge set for reading comprehension over multiple sentences[C]//Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Stroudsburg, USA: ACL Press, 2018: 252-262.
|
| 6 |
CLARK C, LEE K, CHANG M W, et al. BoolQ: exploring the surprising difficulty of natural yes/no questions[EB/OL]. [2024-06-11]. https://arxiv.org/abs/1905.10044.
|
| 7 |
BERANT J, CHOU A, FROSTIG R, et al. Semantic parsing on Freebase from question-answer pairs[C]//Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, USA: ACL Press, 2013: 1533-1544.
|
| 8 |
|
| 9 |
LEWIS M, LIU Y H, GOYAL N, et al. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension[EB/OL]. [2024-06-11]. https://arxiv.org/abs/1910.13461.
|
| 10 |
RAFFEL C, SHAZEER N, ROBERTS A, et al. Exploring the limits of transfer learning with a unified text-to-text transformer[J]. The Journal of Machine Learning Research, 2020, 21(1): 5485-5551.
doi: 10.48550/arXiv.1910.10683
|
| 11 |
|
| 12 |
ROBERTS A, RAFFEL C, SHAZEER N. How much knowledge can you pack into the parameters of a language model?[EB/OL]. [2024-06-11]. https://arxiv.org/abs/2002.08910.
|
| 13 |
|
| 14 |
CHUNG H W, HOU L, LONGPRE S, et al. Scaling instruction-finetuned language models[J]. The Journal of Machine Learning Research, 2024, 25(1): 3381-3433.
|
| 15 |
|
| 16 |
OUYANG L, WU J, JIANG X, et al. Training language models to follow instructions with human feedback[J]. Advances in Neural Information Processing Systems, 2022, 35: 27730-27744.
|
| 17 |
PRAGER J. Open-domain question-answering[J]. Foundations and Trends® in Information Retrieval, 2006, 1(2): 191-231.
|
| 18 |
WANG A, SINGH A, MICHAEL J, et al. GLUE: a multi-task benchmark and analysis platform for natural language understanding[EB/OL]. [2024-06-11]. https://arxiv.org/abs/1804.07461.
|
| 19 |
RADFORD A, WU J, CHILD R, et al. Language models are unsupervised multitask learners. OpenAI Blog, 2019, 1(8): 9.
|
| 20 |
LEWIS P, PEREZ E, PIKTUS A, et al. Retrieval-augmented generation for knowledge-intensive NLP tasks[J]. Advances in Neural Information Processing Systems, 2020, 33: 9459-9474.
doi: 10.48550/arXiv.2005.11401
|
| 21 |
|
| 22 |
IZACARD G, GRAVE E. Leveraging passage retrieval with generative models for open domain question answering[EB/OL]. [2024-06-11]. https://arxiv.org/abs/2007.01282.
|
| 23 |
CLARK P, COWHEY I, ETZIONI O, et al. Think you have solved question answering? Try ARC, the AI2 reasoning challenge[EB/OL]. [2024-06-11]. https://arxiv.org/abs/1803.05457.
|
| 24 |
XIE Z, THIEM S, MARTIN J, et al. WorldTree V2: a corpus of science-domain structured explanations and inference patterns supporting multi-hop inference[C]//Proceedings of the 12th Language Resources and Evaluation Conference. Marseille, France: European Language Resources Association, 2020: 5456-5473.
|
| 25 |
|
| 26 |
|
| 27 |
|
| 28 |
PAPINENI K, ROUKOS S, WARD T, et al. BLEU: a method for automatic evaluation of machine translation[C]//Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, USA: ACL Press, 2002: 311-318.
|
| 29 |
SELLAM T, DAS D, PARIKH A P. BLEURT: learning robust metrics for text generation[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, USA: ACL Press, 2020: 7881-7892.
|
| 30 |
LIU Q, LIANG B, XU J, et al. A deep hierarchical neural network model for aspect-based sentiment analysis[J]. Chinese Journal of Computers, 2018, 41(12): 2637-2652.
doi: 10.11897/SP.J.1016.2018.02637
|
| 31 |
LUAN K X, DU X K, SUN C J, et al. Sentence ordering based on attention mechanism[J]. Journal of Chinese Information Processing, 2018, 32(1): 123-130.
|
| 32 |
ZHOU Q W, SHI S C, WANG H J. Overview of controlled text generation based on pre-trained models[J]. Software Guide, 2024, 23(4): 199-207.
|