[1] DUAN X Y, YIN M M, ZHANG M, et al. Zero-shot cross-lingual abstractive sentence summarization through teaching generation and attention[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, USA: Association for Computational Linguistics, 2019: 3162-3172.
[2] ZHU J N, WANG Q A, WANG Y N, et al. NCLS: neural cross-lingual summarization[C]//Proceedings of Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Stroudsburg, USA: Association for Computational Linguistics, 2019: 3054-3064.
[3] ZHANG B L, NAGESH A, KNIGHT K. Parallel corpus filtering via pre-trained language models[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, USA: Association for Computational Linguistics, 2020: 8545-8554.
[4]
[5]
[6] LAI H, GAO Y M, HUANG Y X, et al. Evaluation method of text generation based on multi-granularity features. Journal of Chinese Information Processing, 2022, 36(3): 45-53, 63. (in Chinese)
[7] DOU Z Y, KUMAR S, TSVETKOV Y. A deep reinforced model for zero-shot cross-lingual summarization with bilingual semantic similarity rewards[C]//Proceedings of the 4th Workshop on Neural Generation and Translation. Stroudsburg, USA: Association for Computational Linguistics, 2020: 60-68.
[8] LEUSKI A, LIN C Y, ZHOU L A, et al. Cross-lingual C*ST*RD. ACM Transactions on Asian Language Information Processing, 2003, 2(3): 245-269.
[9]
[10] ORĂSAN C, CHIOREAN O A. Evaluation of a cross-lingual Romanian-English multi-document summariser[C]//Proceedings of the 6th International Conference on Language Resources and Evaluation. Stroudsburg, USA: Association for Computational Linguistics, 2008: 1-10.
[11] AYANA, SHEN S Q, CHEN Y, et al. Zero-shot cross-lingual neural headline generation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2018, 26(12): 2319-2327.
[12] ZHU J N, ZHOU Y, ZHANG J J, et al. Attend, translate and summarize: an efficient method for neural cross-lingual summarization[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, USA: Association for Computational Linguistics, 2020: 1-10.
[13] CAO Y E, LIU H, WAN X J. Jointly learning to align and summarize for neural cross-lingual summarization[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, USA: Association for Computational Linguistics, 2020: 6220-6231.
[14] BAI Y, GAO Y, HUANG H Y. Cross-lingual abstractive summarization with limited parallel resources[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. Stroudsburg, USA: Association for Computational Linguistics, 2021: 6910-6924.
[15]
[16] YOU Y J, JIA W J, LIU T Y, et al. Improving abstractive document summarization with salient information modeling[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, USA: Association for Computational Linguistics, 2019: 2132-2137.
[17]
[18]
[19] YOON W, YEO Y S, JEONG M, et al. Learning by semantic similarity makes abstractive summarization better[EB/OL]. [2023-02-18]. https://arxiv.org/abs/2002.07767.
[20] HU B T, CHEN Q C, ZHU F Z. LCSTS: a large scale Chinese short text summarization dataset[C]//Proceedings of Conference on Empirical Methods in Natural Language Processing. Stroudsburg, USA: Association for Computational Linguistics, 2015: 1967-1972.
[21] ZHAO H M, LIU Q. Summary of common error types in machine translation[C]//Proceedings of the 10th National Machine Translation Symposium. Chongqing, China: Translation Association of China, 2013: 1-10. (in Chinese)
[22] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. New York, USA: ACM Press, 2017: 5998-6008.
[23] DYER C, CHAHUNEAU V, SMITH N A. A simple, fast, and effective reparameterization of IBM model 2[C]//Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, USA: Association for Computational Linguistics, 2013: 644-648.
[24] RENNIE S J, MARCHERET E, MROUEH Y, et al. Self-critical sequence training for image captioning[C]//Proceedings of Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2017: 7008-7024.
[25] KANG X M, ZHAO Y, ZHANG J J, et al. Dynamic context selection for document-level neural machine translation via reinforcement learning[C]//Proceedings of Conference on Empirical Methods in Natural Language Processing. Stroudsburg, USA: Association for Computational Linguistics, 2020: 2242-2254.
[26] WU L J, TIAN F, QIN T, et al. A study of reinforcement learning for neural machine translation[C]//Proceedings of Conference on Empirical Methods in Natural Language Processing. Stroudsburg, USA: Association for Computational Linguistics, 2018: 3612-3621.
[27] JAUREGI UNANUE I, PARNELL J, PICCARDI M. BERTTune: fine-tuning neural machine translation with BERTScore[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. Stroudsburg, USA: Association for Computational Linguistics, 2021: 1-10.