
Computer Engineering (计算机工程)


Dual-channel joint encoding relation extraction model with attention fusion

  • Published: 2026-04-02


Abstract: To address the limitations of existing relation triplet extraction methods in complex contexts, namely insufficient multi-relation semantic representation and difficulty in extracting implicit relations, this paper proposes a dual-channel joint encoding model with attention mechanisms, termed AMJERE (Attention-Mechanism Joint Encoding for Relation Extraction). The model constructs independent yet interactive sentence and relation encoding channels to enhance the completeness and discriminative ability of relation semantic representations. Specifically, AMJERE employs a sentence–relation dual-channel independent encoding architecture to represent input sentences and candidate relations separately, reducing interference from overlapping semantic spaces. A relation fusion module based on self-attention is introduced to strengthen implicit relation modeling by incorporating sentence contextual information. Furthermore, a cross-channel cross-attention mechanism enables deep semantic interaction between sentence and relation representations, capturing latent dependencies between entities and relations and producing compact joint representations. Finally, multiple linear classifiers perform relation prediction and entity label identification, achieving joint extraction of relation triplets. Experimental results on the NYT and WebNLG datasets show that AMJERE outperforms several mainstream baseline models in precision, recall, and F1 score, achieving F1 values of 93.3% and 93.5%, respectively. Ablation studies and qualitative analyses further verify the effectiveness and robustness of the proposed model in multi-relation semantic representation and implicit relation extraction.
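The cross-channel interaction described above, where relation representations attend over sentence token representations, can be illustrated with a minimal single-head cross-attention sketch. This is not the authors' implementation: the shapes, projection matrices, and function names below are illustrative assumptions, not details taken from the AMJERE paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_channel_attention(sent, rel, Wq, Wk, Wv):
    """Single-head cross-attention sketch: relation-channel vectors act as
    queries and attend over sentence-channel token vectors (keys/values).

    sent: (n_tokens, d)  sentence-channel encodings
    rel:  (n_rels, d)    relation-channel encodings
    Returns (n_rels, d) context-aware relation representations.
    """
    Q = rel @ Wq                               # (n_rels, d)
    K = sent @ Wk                              # (n_tokens, d)
    V = sent @ Wv                              # (n_tokens, d)
    scores = Q @ K.T / np.sqrt(Q.shape[-1])    # scaled dot-product scores
    attn = softmax(scores, axis=-1)            # each relation attends over tokens
    return attn @ V

# Toy example: 5 tokens, 3 candidate relations, hidden size 8 (all assumed).
rng = np.random.default_rng(0)
d = 8
sent = rng.normal(size=(5, d))
rel = rng.normal(size=(3, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = cross_channel_attention(sent, rel, Wq, Wk, Wv)
print(out.shape)  # one fused vector per candidate relation
```

In the full model, the fused relation representations would then feed the linear classifiers for relation prediction and entity tagging; here the sketch only shows the attention step itself.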