[1]Zhou A, Yan K, Shlapentokh-Rothman M, et al. Language agent tree search unifies reasoning acting and planning in language models[J]. arXiv preprint arXiv:2310.04406, 2023.
[2]Tang J, Yang Y, Wei W, et al. HiGPT: Heterogeneous graph language model[C]//Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2024: 2842-2853.
[3]Chen R, et al. Is a large language model a good annotator for event extraction?[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2024, 38(16).
[4]Yao Y. An outline of a theory of three-way decisions[C]//International conference on rough sets and current trends in computing. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012: 1-17.
[5]Zhan J, Ye J, Ding W, et al. A novel three-way decision model based on utility theory in incomplete fuzzy decision systems[J]. IEEE Transactions on Fuzzy Systems, 2021, 30(7): 2210-2226.
[6]Achiam J, Adler S, Agarwal S, et al. GPT-4 technical report[J]. arXiv preprint arXiv:2303.08774, 2023.
[7]Touvron H, Martin L, Stone K, et al. Llama 2: Open foundation and fine-tuned chat models[J]. arXiv preprint arXiv:2307.09288, 2023.
[8]Song F, et al. Preference ranking optimization for human alignment[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2024, 38(17).
[9]Fang Y, Fan D, Zha D, et al. GAugLLM: Improving graph contrastive learning for text-attributed graphs with large language models[C]//Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2024: 747-758.
[10]Hinder F, Vaquet V, Hammer B. One or two things we know about concept drift—a survey on monitoring in evolving environments. Part A: detecting concept drift[J]. Frontiers in Artificial Intelligence, 2024, 7: 1330257.
[11]Dou S, Zhou E, Liu Y, et al. LoRAMoE: Revolutionizing mixture of experts for maintaining world knowledge in language model alignment[J]. arXiv preprint arXiv:2312.09979, 2023.
[12]Sylolypavan A, Sleeman D, Wu H, et al. The impact of inconsistent human annotations on AI driven clinical decision making[J]. npj Digital Medicine, 2023, 6: 26.
[13]Yao Y. Tri-level thinking: models of three-way decision[J]. International Journal of Machine Learning and Cybernetics, 2020, 11(5): 947-959.
[14]Ahmed T, et al. Can LLMs replace manual annotation of software engineering artifacts?[J]. arXiv preprint arXiv:2408.05534, 2024.
[15]Fedus W, Zoph B, Shazeer N. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity[J]. Journal of Machine Learning Research, 2022, 23(120): 1-39.
[16]Lepikhin D, Lee H J, Xu Y, et al. GShard: Scaling giant models with conditional computation and automatic sharding[J]. arXiv preprint arXiv:2006.16668, 2020.
[17]Li M, et al. CoAnnotating: Uncertainty-guided work allocation between human and large language models for data annotation[J]. arXiv preprint arXiv:2310.15638, 2023.
[18]Zhou Y, Lei T, Liu H, et al. Mixture-of-experts with expert choice routing[J]. Advances in Neural Information Processing Systems, 2022, 35: 7103-7114.
[19]Sadana U, Chenreddy A, Delage E, et al. A survey of contextual optimization methods for decision-making under uncertainty[J]. European Journal of Operational Research, 2024.
[20]Zhan J, Wang J, Ding W, et al. Three-way behavioral decision making with hesitant fuzzy information systems: survey and challenges[J]. IEEE/CAA Journal of Automatica Sinica, 2022, 10(2): 330-350.
[21]Ding J, Zhang C, Li D, et al. Three-way decisions in generalized intuitionistic fuzzy environments: survey and challenges[J]. Artificial Intelligence Review, 2024, 57(2): 38.
[22]Tang M. A human-machine hybrid intelligent decision system and method: CN116578916A[P]. 2023.
[23]Gu P, Liu J, Zhou X. Approaches to three-way decisions based on the evaluation of probabilistic linguistic terms sets[J]. Symmetry, 2021, 13(5): 764.
[24]Zhang Q, Pang G, Wang G. A novel sequential three-way decisions model based on penalty function[J]. Knowledge-Based Systems, 2020, 192: 105350.
[25]Cai W, Jiang J, Wang F, et al. A survey on mixture of experts[J]. arXiv preprint arXiv:2407.06204, 2024.
[26]Chakraborty S. TOPSIS and Modified TOPSIS: A comparative analysis[J]. Decision Analytics Journal, 2022, 2: 100021.
[27]Yu D, Kou G, Xu Z, et al. Analysis of collaboration evolution in AHP research: 1982–2018[J]. International Journal of Information Technology & Decision Making, 2021, 20(01): 7-36.
[28]Chicco D, Warrens M J, Jurman G. The coefficient of determination R-squared is more informative than SMAPE, MAE, MAPE, MSE and RMSE in regression analysis evaluation[J]. Peerj computer science, 2021, 7: e623.
[29]Zhang C, Ding J, Zhan J, et al. Fuzzy intelligence learning based on bounded rationality in IoMT systems: a case study in Parkinson’s disease[J]. IEEE Transactions on Computational Social Systems, 2022, 10(4): 1607-1621.
[30]Gupta S, Modgil S, Bhattacharyya S, et al. Artificial intelligence for decision support systems in the field of operations research: review and future scope of research[J]. Annals of Operations Research, 2022, 308(1): 215-274.
[31]Yang Y, Yi F, Deng C, et al. Performance analysis of the CHAID algorithm for accuracy[J]. Mathematics, 2023, 11(11): 2558.
[32]Karkour Y, Tajani C, Khattabi I. Generating a set of consistent pairwise comparison test matrices in AHP using particle swarm optimization[J]. WSEAS Transactions on Mathematics, 2024, 23: 206-215.
[33]Luo W, Wang H F. A survey of large language model evaluation[J]. Journal of Chinese Information Processing, 2024, 38(1): 1-23.
[34]Li Z F, Wang H, Fang B F. A multi-agent collaboration method based on multi-step competition network[J]. Computer Engineering, 2022, 48(5): 74-81. DOI: 10.19678/j.issn.1000-3428.0061437.
[35]Li J C, Xiao C L, Qin X T, et al. A text relation extraction algorithm based on large language model and semantic enhancement[J]. Computer Engineering, 2024, 50(4): 87-94. DOI: 10.19678/j.issn.1000-3428.0068501.