[1] AHN D. The stages of event extraction[C]//Proceedings of the Workshop on Annotating and Reasoning About Time and Events. New York, USA: ACM Press, 2006: 1-8.
[2]
[3] CHEN Y B, LIU S L, ZHANG X, et al. Automatically labeled data generation for large scale event extraction[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. [S.l.]: Association for Computational Linguistics, 2017: 409-419.
[4]
[5] YANG H, CHEN Y B, LIU K, et al. DCFEE: a document-level Chinese financial event extraction system based on automatically labeled training data[EB/OL]. [2022-10-05]. https://aclanthology.org/P18-4009.pdf.
[6] 陈斌, 周勇, 刘兵. 基于卷积双向长短期记忆网络的事件触发词抽取[J]. 计算机工程, 2019, 45(1): 153-158.
CHEN B, ZHOU Y, LIU B. Event trigger word extraction based on convolutional bidirectional long short-term memory network[J]. Computer Engineering, 2019, 45(1): 153-158.
[7] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. New York, USA: ACM Press, 2017: 6000-6010.
[8] ZHENG S, CAO W, XU W, et al. Doc2EDAG: an end-to-end document-level framework for Chinese financial event extraction[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. [S.l.]: Association for Computational Linguistics, 2019: 337-346.
[9] 仲伟峰, 杨航, 陈玉博, 等. 基于联合标注和全局推理的篇章级事件抽取[J]. 中文信息学报, 2019, 33(9): 88-95, 106.
ZHONG W F, YANG H, CHEN Y B, et al. Document-level event extraction based on joint labeling and global reasoning[J]. Journal of Chinese Information Processing, 2019, 33(9): 88-95, 106.
[10] EBNER S, XIA P, CULKIN R, et al. Multi-sentence argument linking[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. [S.l.]: Association for Computational Linguistics, 2020: 8057-8077.
[11]
[12] LIU P, YUAN W, FU J, et al. Pre-train, prompt, and predict: a systematic survey of prompting methods in natural language processing[EB/OL]. [2022-10-05]. https://arxiv.org/pdf/2107.13586.pdf.
[13] LEVY O, SEO M, CHOI E, et al. Zero-shot relation extraction via reading comprehension[C]//Proceedings of the 21st Conference on Computational Natural Language Learning. [S.l.]: Association for Computational Linguistics, 2017: 333-342.
[14] PETRONI F, ROCKTÄSCHEL T, RIEDEL S, et al. Language models as knowledge bases?[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. [S.l.]: Association for Computational Linguistics, 2019: 2463-2473.
[15] SHIN T, RAZEGHI Y, LOGAN R L, et al. AutoPrompt: eliciting knowledge from language models with automatically generated prompts[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. [S.l.]: Association for Computational Linguistics, 2020: 4222-4235.
[16] LI X Y, FENG J R, MENG Y X, et al. A unified MRC framework for named entity recognition[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. [S.l.]: Association for Computational Linguistics, 2020: 5849-5859.
[17] DU X Y, CARDIE C. Event extraction by answering (almost) natural questions[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. [S.l.]: Association for Computational Linguistics, 2020: 671-683.
[18] 李珂, 陈彦如, 郑文蛟, 等. 基于机器阅读理解的新闻时间线挖掘与展示[J]. 情报理论与实践, 2022, 45(4): 184-189.
LI K, CHEN Y R, ZHENG W J, et al. News timeline mining and presentation based on machine reading comprehension[J]. Information Studies (Theory & Application), 2022, 45(4): 184-189.
[19] LIU J, CHEN Y F, XU J N. Machine reading comprehension as data augmentation: a case study on implicit event argument extraction[C]//Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. [S.l.]: Association for Computational Linguistics, 2021: 2716-2725.
[20] LEWIS M, LIU Y H, GOYAL N, et al. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. [S.l.]: Association for Computational Linguistics, 2020: 7871-7880.
[21]
[22] 庄福振, 罗平, 何清, 等. 迁移学习研究进展[J]. 软件学报, 2015, 26(1): 26-39.
ZHUANG F Z, LUO P, HE Q, et al. Survey on transfer learning research[J]. Journal of Software, 2015, 26(1): 26-39.
[23]
[24] XU R X, LIU T Y, LI L, et al. Document-level event extraction via heterogeneous graph-based interaction model with a tracker[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. [S.l.]: Association for Computational Linguistics, 2021: 3533-3546.
[25] ZHU T, QU X, CHEN W, et al. Efficient document-level event extraction via pseudo-trigger-aware pruned complete graph[EB/OL]. [2022-10-05]. https://arxiv.org/abs/2112.06013.