
Computer Engineering


DM-MoE: A Mixture of Experts Multi-Decision Method for Annotation-Based Debiasing


  • Published: 2025-09-18


Abstract: In complex intelligent decision-making tasks, domain annotation bias degrades the quality of model training data, which in turn harms the generalization ability and decision performance of the system. Such bias usually stems from two causes: (1) the sparsity of expert-annotated data caused by the scarcity of relevant expert resources, which limits the performance of traditional supervised learning methods; and (2) the heterogeneity of expert knowledge caused by differing expert tendencies (including differences in professional background and diversity of risk preferences), which triggers decision conflicts. Existing studies have not yet effectively addressed the uncertainty arising from expert annotation sparsity, multiple expert tendencies, and conflicts in fusing expert knowledge. To this end, this paper proposes a multi-expert, multi-perspective approach to the domain annotation bias problem, Decision Making with MoE (DM-MoE), which integrates the Mixture-of-Experts (MoE) strategy with uncertainty reasoning to construct a collaborative decision-making framework. The method builds a multi-agent system through prompt engineering: LLMs (including DeepSeek, GPT-4, and ERNIE Bot) are prompted into cross-domain experts for different domains, and decision annotations are generated dynamically according to the experts' real-time tendency changes. A dynamic three-way decision mechanism then models the decision information of the multi-tendency, multi-perspective experts. Finally, a two-stage optimization strategy is designed: for the uncertain samples in the deferred (boundary) region, multi-criteria weights are assigned via an LLM-based Analytic Hierarchy Process (AHP) and combined with the TOPSIS method for iterative multi-criteria optimization.
Experiments show that DM-MoE achieves superior accuracy and stability compared with traditional decision-making methods.
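The three-way decision mechanism described above can be illustrated with a minimal sketch. The thresholds, the mean-pooling aggregation of expert opinions, and the function names below are hypothetical stand-ins, not the paper's exact formulation; the point is only the partition into accept / reject / deferred regions, with the deferred region passed on to the second-stage optimization.

```python
# Illustrative three-way decision step. Assumes each LLM expert agent
# returns a probability that a sample belongs to the positive class.
# The thresholds alpha/beta and mean aggregation are made-up choices.

def three_way_decide(prob, alpha=0.7, beta=0.3):
    """Map a positive-class probability to one of three regions."""
    if prob >= alpha:
        return "accept"   # positive region: confident annotation
    if prob <= beta:
        return "reject"   # negative region: confident rejection
    return "defer"        # boundary region: sent to AHP/TOPSIS stage

def aggregate_experts(expert_probs):
    """Mean pooling over expert opinions (a stand-in for MoE gating)."""
    return sum(expert_probs) / len(expert_probs)

# Three experts largely agree -> confident acceptance.
print(three_way_decide(aggregate_experts([0.9, 0.8, 0.75])))  # accept
# Conflicting experts -> the sample lands in the deferred region.
print(three_way_decide(aggregate_experts([0.9, 0.2, 0.5])))   # defer
```

In a dynamic variant, alpha and beta would themselves be updated as expert tendencies shift, which is what makes the mechanism suitable for the real-time tendency changes the abstract mentions.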

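The second-stage optimization pairs AHP-derived criterion weights with TOPSIS ranking. A minimal TOPSIS sketch is shown below; the decision matrix, the three criteria (confidence, consistency, cost), and the weight vector are invented illustrations — in the paper the weights would come from the LLM-guided AHP step rather than being hard-coded.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal solution.
    matrix:  rows = alternatives, columns = criteria.
    weights: criterion weights (e.g. from an AHP pairwise step).
    benefit: True where larger is better, False for cost criteria."""
    n_rows, n_cols = len(matrix), len(matrix[0])
    # 1. Vector-normalize each column, then apply criterion weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(n_rows)))
             for j in range(n_cols)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n_cols)]
         for i in range(n_rows)]
    # 2. Ideal (best) and anti-ideal (worst) value per criterion.
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*v))]
    # 3. Closeness coefficient: distance to anti-ideal / total distance.
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)
        d_neg = math.dist(row, anti)
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Three candidate annotations for one deferred sample, scored on
# (confidence, consistency, cost); cost is a smaller-is-better criterion.
matrix = [[0.9, 0.7, 0.2],
          [0.6, 0.9, 0.1],
          [0.8, 0.5, 0.4]]
weights = [0.5, 0.3, 0.2]
benefit = [True, True, False]
scores = topsis(matrix, weights, benefit)
best = max(range(len(scores)), key=scores.__getitem__)
print(best)  # index of the highest-ranked candidate annotation
```

In the iterative version the abstract describes, the AHP weights could be re-elicited and the ranking rerun until the deferred region is resolved.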