[1] ZHU Y Q, ZHAO P, ZHAO F F, et al. Survey on abstractive text summarization technologies based on deep learning[J]. Computer Engineering, 2021, 47(11): 11-21, 28. (in Chinese)
[2] LI J P, ZHANG C, CHEN X J, et al. Survey on automatic text summarization[J]. Journal of Computer Research and Development, 2021, 58(1): 1-21. (in Chinese)
[3] RUSH A M, CHOPRA S, WESTON J. A neural attention model for abstractive sentence summarization[C]//Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Philadelphia, USA: Association for Computational Linguistics, 2015: 379-389.
[4] SEE A, LIU P J, MANNING C D. Get to the point: summarization with pointer-generator networks[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Philadelphia, USA: Association for Computational Linguistics, 2017: 1073-1083.
[5] JIN H, WANG T, WAN X. SemSUM: semantic dependency guided neural abstractive summarization[C]//Proceedings of the AAAI Conference on Artificial Intelligence. Palo Alto, USA: AAAI Press, 2020: 8026-8033.
[6] XIAO L, WANG L, HE H, et al. Copy or rewrite: hybrid summarization with hierarchical reinforcement learning[C]//Proceedings of the AAAI Conference on Artificial Intelligence. Palo Alto, USA: AAAI Press, 2020: 9306-9313.
[7] STEINBERGER J, KABADJOV M, POESIO M, et al. Improving LSA-based summarization with anaphora resolution[C]//Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing. Philadelphia, USA: Association for Computational Linguistics, 2005: 1-8.
[8] KAR M, NUNES S, RIBEIRO C. Summarization of changes in dynamic text collections using Latent Dirichlet Allocation model[J]. Information Processing & Management, 2015, 51(6): 809-833.
[9] MIHALCEA R, TARAU P. TextRank: bringing order into text[C]//Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. Philadelphia, USA: Association for Computational Linguistics, 2004: 404-411.
[10] CALDARELLI G, CRISTELLI M, GABRIELLI A, et al. Ranking and clustering countries and their products: a network analysis[EB/OL]. [2022-02-11]. https://arxiv.org/abs/1108.2590.
[11] ERKAN G, RADEV D R. LexRank: graph-based lexical centrality as salience in text summarization[EB/OL]. [2022-02-11]. https://arxiv.org/abs/1109.2128.
[12] PARVEEN D, STRUBE M. Integrating importance, non-redundancy and coherence in graph-based extractive summarization[C]//Proceedings of the 24th International Conference on Artificial Intelligence. Palo Alto, USA: AAAI Press, 2015: 1298-1304.
[13] YASUNAGA M, ZHANG R, MEELU K, et al. Graph-based neural multi-document summarization[C]//Proceedings of the 21st Conference on Computational Natural Language Learning. Philadelphia, USA: Association for Computational Linguistics, 2017: 452-462.
[14] ZHENG H, LAPATA M. Sentence centrality revisited for unsupervised summarization[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Philadelphia, USA: Association for Computational Linguistics, 2019: 6236-6247.
[15] XU X T, CHAI X L, XIE B, et al. Extraction of Chinese text summarization based on improved TextRank algorithm[J]. Computer Engineering, 2019, 45(3): 273-277. (in Chinese)
[16] WANG D, LIU P, ZHENG Y, et al. Heterogeneous graph neural networks for extractive document summarization[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Philadelphia, USA: Association for Computational Linguistics, 2020: 6209-6219.
[17] ZHANG N, DENG S, LI J, et al. Summarizing Chinese medical answer with graph convolution networks and question-focused dual attention[C]//Proceedings of EMNLP'20. Philadelphia, USA: Association for Computational Linguistics, 2020: 15-24.
[18] VELIČKOVIĆ P, CUCURULL G, CASANOVA A, et al. Graph attention networks[EB/OL]. [2022-02-11]. https://arxiv.org/abs/1710.10903.
[19] LIU Y, LAPATA M. Text summarization with pretrained encoders[EB/OL]. [2022-02-11]. https://arxiv.org/abs/1908.08345.
[20] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Philadelphia, USA: Association for Computational Linguistics, 2019: 4171-4186.
[21] LI B H, ZHOU H, HE J X, et al. On the sentence embeddings from pre-trained language models[EB/OL]. [2022-02-11]. https://arxiv.org/abs/2011.05864.
[22] SU J L, CAO J R, LIU W J, et al. Whitening sentence representations for better semantics and faster retrieval[EB/OL]. [2022-02-11]. https://arxiv.org/abs/2103.15316.
[23] BLEI D M, NG A Y, JORDAN M I. Latent Dirichlet allocation[J]. Journal of Machine Learning Research, 2003, 3: 993-1022.
[24] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. New York, USA: ACM Press, 2017: 6000-6010.
[25] CARBONELL J, GOLDSTEIN J. The use of MMR, diversity-based reranking for reordering documents and producing summaries[C]//Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. New York, USA: ACM Press, 1998: 335-336.
[26] KAZEMI A, PÉREZ-ROSAS V, MIHALCEA R. Biased TextRank: unsupervised graph-based content extraction[C]//Proceedings of the 28th International Conference on Computational Linguistics. Barcelona, Spain: International Committee on Computational Linguistics, 2020: 1642-1652.
[27] SULTANA M, CHAKRABORTY P, CHOUDHURY T. Bengali abstractive news summarization using Seq2Seq learning with attention[M]. Berlin, Germany: Springer, 2022.
[28] MA C B, ZHANG W E, GUO M Y, et al. Multi-document summarization via deep learning techniques: a survey[EB/OL]. [2022-02-11]. https://arxiv.org/abs/2011.04843.