
计算机工程 (Computer Engineering) ›› 2022, Vol. 48 ›› Issue (7): 82-88. doi: 10.19678/j.issn.1000-3428.0062113

• Artificial Intelligence and Pattern Recognition •

Ontology-Instances Joint Learning Method Fusing Entity Type Information

YOU Leqi, PEI Zhongmin, LUO Zhangkai

  1. Key Laboratory of Complex Electronic System Simulation, Space Engineering University, Beijing 101416, China
  • Received: 2021-07-19  Revised: 2021-09-07  Online: 2022-07-15  Published: 2021-09-10
  • About the authors: YOU Leqi (born 1997), male, M.S. candidate; his research interests include artificial intelligence and systems science. PEI Zhongmin (corresponding author), associate research fellow, Ph.D. LUO Zhangkai, assistant research fellow, Ph.D.
  • Funding:
    Basic Research Project of the National Defense Science and Technology Key Laboratory (DXZT-JC-ZZ-2018-007).

Abstract: Joint learning over the ontology graph and the instance graph that represent a knowledge graph can improve the efficiency of embedding learning, but it cannot distinguish the different meanings that an entity takes in different scenarios. To address this problem, this paper considers the relation type characteristics of the entities in triples during embedding and proposes an ontology-instances joint learning method fusing entity type information, JOIE-TKRL-CT, with the goals of representing polysemous entities in joint learning and improving the efficiency of knowledge graph embedding learning. For intra-view relation representation, the method incorporates entity type information through a hierarchical entity type model and learns representations in two independent embedding spaces. For cross-view relation representation, the ontology and instance representations held in the two independent spaces are linked across views through a non-linear mapping. Experimental results on the YAGO26K-906 and DB111K-174 datasets show that JOIE-TKRL-CT accurately captures the entity type information of the knowledge graph and improves the performance of the joint learning model. Compared with baseline models such as TransE, HolE, and DistMult, it achieves the best performance on both instance triple completion and entity classification tasks, demonstrating a strong knowledge learning capability.
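
The abstract does not give the model equations. As a rough, non-authoritative sketch of the general form such intra-view and cross-view scoring functions take (following the published TKRL and JOIE formulations, which the name JOIE-TKRL-CT suggests the method builds on; the paper's exact definitions may differ):

    % Intra-view (TKRL-style): head and tail entities are projected by matrices
    % built from their hierarchical types before a TransE-style translation check.
    f_{\mathrm{intra}}(h, r, t) = \lVert \mathbf{M}_{t_h}\mathbf{h} + \mathbf{r} - \mathbf{M}_{t_t}\mathbf{t} \rVert

    % Cross-view (JOIE-style non-linear cross-view transformation): an instance
    % embedding e is mapped into the ontology space and compared with its concept c.
    f_{\mathrm{cross}}(c, e) = \lVert \mathbf{c} - \sigma(\mathbf{W}_{ct}\,\mathbf{e} + \mathbf{b}_{ct}) \rVert

Here M_{t_h} and M_{t_t} denote type-specific projection matrices derived from the hierarchical types of the head and tail entities, and sigma(W_{ct} e + b_{ct}) is the non-linear mapping linking the instance view to the ontology view; all symbols are illustrative assumptions rather than notation taken from the paper.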

Key words: joint learning, entity polysemy, cross-view transformation, hierarchical type model, triple completion, entity classification

CLC Number: