[1] BISWAS S S. Potential Use of Chat GPT in Global
Warming[J]. Annals of Biomedical Engineering, 2023,
51(6): 1126-1127.
[2] PHUNG T, PADUREAN V A, CAMBRONERO J,
et al. Generative AI for Programming Education:
Benchmarking ChatGPT, GPT-4, and Human
Tutors[C]//Proceedings of the ACM Conference on
International Computing Education Research, 2023, 2:
41-42.
[3] 严昊, 刘禹良, 金连文, 等. 类ChatGPT大模型发展、应用和前景[J]. 中国图象图形学报, 2023, 28(9): 2749-2762.
YAN H, LIU Y L, JIN L W, et al. The Development, Application, and Prospects of ChatGPT-like Large Language Models[J]. Journal of Image and Graphics, 2023, 28(9): 2749-2762.
[4] 吴砥, 李环, 陈旭. 人工智能通用大模型教育应用影响探析[J]. 开放教育研究, 2023, 29(2): 19-25.
WU D, LI H, CHEN X. Analysis on the Influence of Artificial Intelligence General-Purpose Large Models on Education Applications[J]. Open Education Research, 2023, 29(2): 19-25.
[5] OUYANG L, WU J, JIANG X, et al. Training
Language Models to Follow Instructions with Human
Feedback[C]//Advances in Neural Information
Processing Systems, 2022, 35: 27730-27744.
[6] TOUVRON H, LAVRIL T, IZACARD G, et al.
LLaMA: Open and Efficient Foundation Language
Models[EB/OL]. [2023-10-18].
https://arxiv.org/abs/2302.13971.
[7] LIU P, YUAN W, FU J, et al. Pre-train, Prompt,
and Predict: A Systematic Survey of Prompting
Methods in Natural Language Processing[J]. ACM
Computing Surveys, 2023, 55(9): 1-35.
[8] DU Z, QIAN Y, LIU X, et al. GLM: General
Language Model Pretraining with Autoregressive
Blank Infilling[C]//Proceedings of the 60th Annual
Meeting of the Association for Computational
Linguistics, 2022, 1: 320-335.
[9] TAORI R, GULRAJANI I, ZHANG T, et al.
Alpaca: A Strong, Replicable Instruction-Following
Model[J]. Stanford Center for Research on Foundation
Models, 2023, 3(6): 7.
[10] TIAN C W, ZHANG Y N, ZUO W M, et al. A
Heterogeneous Group CNN for Image
Super-resolution[J]. IEEE Transactions on Neural
Networks and Learning Systems, 2022.
[11] XIONG H, WANG S, ZHU Y, et al. DoctorGLM:
Fine-tuning Your Chinese Doctor Is Not a Herculean
Task[EB/OL]. [2023-10-18].
https://arxiv.org/abs/2304.01097.
[12] LIU Z, ZHONG T, LI Y, et al. Evaluating Large
Language Models for Radiology Natural Language
Processing[EB/OL]. [2023-10-18].
https://arxiv.org/abs/2307.13693.
[13] YANG H, LIU X Y, WANG C D. FinGPT:
Open-Source Financial Large Language
Models[EB/OL]. [2023-10-18].
https://arxiv.org/abs/2306.06031.
[14] FRANTAR E, ASHKBOOS S, HOEFLER T, et al.
GPTQ: Accurate Post-Training Quantization for
Generative Pre-trained Transformers[EB/OL].
[2023-10-18]. https://arxiv.org/abs/2210.17323.
[15] TIAN C W, YUAN Y X, ZHANG S C, et al.
Image Super-resolution with An Enhanced Group
Convolutional Neural Network[J]. Neural Networks,
2022, 153: 373-385.
[16] ZAFRIR O, BOUDOUKH G, IZSAK P, et al.
Q8BERT: Quantized 8Bit BERT[C]//Advances in
Neural Information Processing Systems Workshop,
2019: 36-39.
[17] 刘金硕,刘宁.面向招标文件的半结构化文本自
动生成[J]. 计算机工程,2023,49(3):67-72.
LIU J S, LIU N. Automatic Generation of
Semi-Structured Texts for Bidding Documents[J].
Computer Engineering, 2023, 49(3): 67-72.
[18] 李健智,王红玲,王中卿.基于场景与对话结构的
摘要生成研究[J].计算机工程,2023,49(4):303-311.
LI J Z, WANG H L, WANG Z Q. Research on
Summarization Generation Based on Scene and
Dialogue Structure[J]. Computer Engineering, 2023,
49(4): 303-311.
[19] 高玮军,刘健,毛文静.基于 T-HDGN 模型的对话
摘要生成方法[J].计算机工程,2023,49(10):80-88.
GAO W J, LIU J, MAO W J. Dialogue Summary
Generation Method Based on T-HDGN Model[J].
Computer Engineering, 2023, 49(10): 80-88.
[20] 杨涛,解庆,刘永坚,等.主题感知的长文本自动摘
要算法[J].计算机工程与应用,2022,58(20):165-173.
YANG T, XIE Q, LIU Y J, et al. Research on
Topic-Aware Long Text Summarization Algorithm[J].
Computer Engineering and Applications, 2022, 58(20):
165-173.
[21] ZHAO Z, WANG H. MaskGEC: Improving Neural
Grammatical Error Correction via Dynamic
Masking[C]//Proceedings of the AAAI Conference on
Artificial Intelligence, 2020, 34(1): 1226-1233.
[22] WANG J D, LAN C L, LIU C, et al. Generalizing
to Unseen Domains: A Survey on Domain
Generalization[J]. IEEE Transactions on Knowledge
and Data Engineering, 2022, 35: 8052-8072.
[23] RAFFEL C, SHAZEER N, ROBERTS A, et al.
Exploring the Limits of Transfer Learning with a
Unified Text-to-Text Transformer[J]. The Journal of
Machine Learning Research, 2020, 21(1): 5485-5551.
[24] LIU X, ZHENG Y, DU Z, et al. GPT Understands,
Too[EB/OL]. [2023-10-18].
https://arxiv.org/abs/2103.10385.
[25] LI X L, LIANG P. Prefix-Tuning: Optimizing
Continuous Prompts for Generation[EB/OL].
[2023-10-18]. https://arxiv.org/abs/2101.00190.
[26] HU B, CHEN Q, ZHU F. LCSTS: A Large Scale
Chinese Short Text Summarization
Dataset[C]//Proceedings of the Conference on
Empirical Methods in Natural Language Processing,
2015: 1967-1972.
[27] HUA L, WAN X, LI L. Overview of the NLPCC
2017 Shared Task: Single Document
Summarization[C]//Proceedings of the Natural
Language Processing and Chinese Computing, 2017:
942-947.
[28] CHEN L, LIU Y, AN S Y, et al. Unsupervised
Extractive Summarization with Heterogeneous Graph
Embeddings for Chinese Document[C]//Proceedings of
the IEEE International Conference on Acoustics,
Speech and Signal Processing, 2023: 1-5.
[29] LIN C Y. ROUGE: A Package for Automatic
Evaluation of Summaries[C]//Proceedings of the
Association for Computational Linguistics, 2004:
74-81.
[30] LIU X, JI K, FU Y, et al. P-Tuning v2: Prompt
Tuning Can Be Comparable to Fine-tuning Universally
Across Scales and Tasks[EB/OL]. [2023-10-18].
https://arxiv.org/abs/2110.07602.
[31] HU E J, SHEN Y, WALLIS P, et al. LoRA:
Low-Rank Adaptation of Large Language
Models[EB/OL]. [2023-10-18].
https://arxiv.org/abs/2106.09685.
[32] 张克君,李伟男,钱榕,等.基于深度学习的文本自
动摘要方案[J].计算机应用,2019,39(2):311-315.
ZHANG K J, LI W N, QIAN R, et al. Automatic Text
Summarization Scheme Based on Deep Learning[J].
Journal of Computer Applications, 2019, 39(2):
311-315.
[33] GEHRMANN S, DENG Y, RUSH A M.
Bottom-Up Abstractive Summarization[EB/OL].
[2023-10-18]. https://arxiv.org/abs/1808.10792.
[34] MA S, SUN X, XU J, et al. Improving Semantic
Relevance for Sequence-to-Sequence Learning of
Chinese Social Media Text Summarization[EB/OL].
[2023-10-18]. https://arxiv.org/abs/1706.02459.
[35] JI Z, LEE N, FRIESKE R, et al. Survey of
Hallucination in Natural Language Generation[J].
ACM Computing Surveys, 2023, 55(12): 1-38.
[36] NARAYAN S, COHEN S B, LAPATA M. Don't Give
Me the Details, Just the Summary! Topic-Aware
Convolutional Neural Networks for Extreme
Summarization[EB/OL]. [2023-10-18].
https://arxiv.org/abs/1808.08745.
[37] SUN G, WANG Z, ZHAO J. Automatic Text
Summarization Using Deep Reinforcement Learning
and Beyond[J]. Information Technology and Control,
2021, 50(3): 458-469.
[38] LIN C, LIU Y, AN S, et al. Unsupervised Extractive
Summarization With Heterogeneous Graph
Embeddings for Chinese Documents[C]//Proceedings
of the IEEE International Conference on Acoustics,
Speech and Signal Processing, 2023: 1-5.
[39] ZHENG H, LAPATA M. Sentence Centrality
Revisited for Unsupervised
Summarization[C]//Proceedings of the 57th Annual
Meeting of the Association for Computational
Linguistics, 2019: 6236–6247.
[40] JIANG X, HU P, HOU L, et al. Improving
Pointer-Generator Network with Keywords
Information for Chinese Abstractive
Summarization[C]//Proceedings of the Natural
Language Processing and Chinese Computing, 2018:
464-474.
[41] ZHAO J, CHUNG T L, XU B, et al. Summary++:
Summarizing Chinese News Articles with
Attention[C]//Proceedings of the Natural Language
Processing and Chinese Computing, 2018: 27-37.
[42] SHI Y, MENG J, WANG J, et al. A Normalized
Encoder-Decoder Model for Abstractive
Summarization Using Focal Loss[C]//Proceedings of
the Natural Language Processing and Chinese
Computing, 2018: 383-392.