[1]PATEL A P, FISHER J L, NICHOLS E, et al. Global, regional, and national burden of brain and other CNS cancer, 1990-2016: a systematic analysis for the Global Burden of Disease Study 2016[J]. Lancet Neurology, 2019, 18(4): 376-393.
[2]郭华源, 刘盼, 卢若谷, 等. 人工智能大模型医学应用研究[J]. 中国科学: 生命科学, 2024, 54(3): 482-506.
GUO Huayuan, LIU Pan, LU Ruogu, et al. Research on the Medical Applications of Large-scale Artificial Intelligence Models[J]. Science China Life Sciences, 2024, 54(3): 482-506.
[3]TIAN Yuanhe, GAN Ruyi, SONG Yan, et al. ChiMed-GPT: a Chinese medical large language model with full training regime and better alignment to human preferences[C]//Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Bangkok, Thailand: Association for Computational Linguistics, 2024: 7156-7173.
[4]CHEN Y, WANG Z, XING X, et al. BianQue: balancing the questioning and suggestion ability of health LLMs with multi-turn health conversations polished by ChatGPT[EB/OL]. [2025-05-21]. https://arxiv.org/abs/2310.15896.
[5]万艳丽, 王颖帅, 赵姗姗. 医学大模型研究进展[J]. 医学研究杂志, 2024, 53(10): 1-6+186.
WAN Yanli, WANG Yingshuai, ZHAO Shanshan. Research Progress on Medical Large-scale Models[J]. Journal of Medical Research, 2024, 53(10): 1-6+186.
[6]CHRISTIANO P F, LEIKE J, BROWN T B, et al. Deep reinforcement learning from human preferences[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach: Curran Associates, 2017: 4302-4310.
[7]OUYANG L, WU J, JIANG X, et al. Training language models to follow instructions with human feedback[J]. Advances in Neural Information Processing Systems, 2022, 35: 27730-27744.
[8]罗焕坤, 葛一烽, 刘帅, 等. 大语言模型在数学推理中的研究进展[J/OL]. 计算机工程: 1-23[2025-05-21]. https://doi.org/10.19678/j.issn.1000-3428.0069590.
LUO Huankun, GE Yifeng, LIU Shuai, et al. Research Progress of Large Language Models in Mathematical Reasoning[J/OL]. Computer Engineering: 1-23[2025-05-21]. https://doi.org/10.19678/j.issn.1000-3428.0069590.
[9]李敬灿, 肖萃林, 覃晓婷, 谢夏. 基于大语言模型与语义增强的文本关系抽取算法[J]. 计算机工程, 2024, 50(4): 87-94.
LI Jingcan, XIAO Cuilin, QIN Xiaoting, XIE Xia. Text Relation Extraction Algorithm Based on Large Language Model and Semantic Enhancement[J]. Computer Engineering, 2024, 50(4): 87-94.
[10]BROWN T, MANN B, RYDER N, et al. Language models are few-shot learners[J]. Advances in Neural Information Processing Systems, 2020, 33: 1877-1901.
[11]ACHIAM J, ADLER S, AGARWAL S, et al. GPT-4 technical report[EB/OL]. [2025-05-21]. https://arxiv.org/abs/2303.08774.
[12]GRATTAFIORI A, DUBEY A, JAUHRI A, et al. The Llama 3 herd of models[EB/OL]. [2025-05-21]. https://arxiv.org/abs/2407.21783.
[13]CUI Y, YANG Z, YAO X. Efficient and effective text encoding for Chinese LLaMA and Alpaca[EB/OL]. [2025-05-21]. https://arxiv.org/abs/2304.08177.
[14]ZHANG J, GAN R, WANG J, et al. Fengshenbang 1.0: being the foundation of Chinese cognitive intelligence[EB/OL]. [2025-05-21]. https://arxiv.org/abs/2209.02970.
[15]张一帆, 张泽瑞, 董敬, 等. 大模型时代下的医疗人工智能技术进展与挑战[J]. 中国医学装备, 2024, 21(6): 189-194.
ZHANG Yifan, ZHANG Zerui, DONG Jing, et al. Advances and Challenges of Medical Artificial Intelligence in the Era of Large-scale Models[J]. China Medical Equipment, 2024, 21(6): 189-194.
[16]HAN T, ADAMS L C, PAPAIOANNOU J M, et al. MedAlpaca - an open-source collection of medical conversational AI models and training data[EB/OL]. [2025-05-21]. https://arxiv.org/abs/2304.08247.
[17]SINGHAL K, TU T, GOTTWEIS J, et al. Toward expert-level medical question answering with large language models[J]. Nature Medicine, 2025, 31: 943-950. https://doi.org/10.1038/s41591-024-03423-7.
[18]ZHANG Zhiyi, XIAO Qingying, WAN Xiang, et al. HuatuoGPT, towards taming language model to be a doctor[C]//Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP). Singapore: Association for Computational Linguistics, 2023: 10859-10885.
[19]YUE Shengbin, LIU Shujun, ZHOU Yuxuan, et al. LawLLM: intelligent legal system with legal reasoning and verifiable retrieval[C]//Proceedings of the International Conference on Database Systems for Advanced Applications. Cham: Springer, 2024: 304-321.
[20]CHEN Wei, WANG Qiushi, LONG Zefei, et al. DISC-FinLLM: a Chinese financial large language model based on multiple experts fine-tuning[EB/OL]. [2025-05-21]. https://arxiv.org/abs/2310.15205.
[21]DAN Yuhao, LEI Zhikai, GU Yiyang, et al. EduChat: a large-scale language model-based chatbot system for intelligent education[C]//Proceedings of the China Conference on Knowledge Graph and Semantic Computing (CCKS 2024). [S.l.]: [s.n.], 2024.
[22]LESTER B, AL-RFOU R, CONSTANT N. The power of scale for parameter-efficient prompt tuning[C]//Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Punta Cana, Dominican Republic: Association for Computational Linguistics, 2021: 3045-3059.
[23]HU E J, SHEN Y, WALLIS P, et al. LoRA: low-rank adaptation of large language models[C]//Proceedings of the 10th International Conference on Learning Representations (ICLR 2022). [S.l.]: [s.n.], 2022: 3.
[24]SCHULMAN J, WOLSKI F, DHARIWAL P, et al. Proximal policy optimization algorithms[EB/OL]. [2025-05-21]. https://arxiv.org/abs/1707.06347.
[25]刘波, 束朋辉, 刘军旗. 磁共振弥散加权成像联合磁共振波谱成像技术诊断脑肿瘤的价值[J]. 中国医学工程, 2024, 32(8): 112-115.
LIU Bo, SHU Penghui, LIU Junqi. Diagnostic Value of Magnetic Resonance Diffusion-Weighted Imaging Combined with Magnetic Resonance Spectroscopy in Brain Tumors[J]. China Medical Engineering, 2024, 32(8): 112-115.
[26]YANG An, YANG Baosong, HUI Binyuan, et al. Qwen2 technical report[EB/OL]. [2025-05-21]. https://arxiv.org/abs/2407.10671.
[27]YANG A, XIAO B, WANG B, et al. Baichuan 2: open large-scale language models[EB/OL]. [2025-05-21]. https://arxiv.org/abs/2309.10305.
[28]TEAM GLM, ZENG A, XU B, et al. ChatGLM: a family of large language models from GLM-130B to GLM-4 all tools[EB/OL]. [2025-05-21]. https://arxiv.org/abs/2406.12793.