
Computer Engineering, 2024, Vol. 50, Issue (1): 79-90. doi: 10.19678/j.issn.1000-3428.0066459

• Artificial Intelligence and Pattern Recognition •


Target-Level Implicit Sentiment Classification Based on Dual Multiview Representation

Mengmeng CUI 1,*, Jingping LIU 1, Tong RUAN 1, Yuqiu SONG 1, Wen DU 2

  1. School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
    2. DS Information Technology Co., Ltd., Shanghai 200032, China
  • Received: 2022-12-07 Online: 2024-01-15 Published: 2023-04-12
  • Contact: Mengmeng CUI
  • Funding: National Key Research and Development Program of China (2021YFC2701800); National Key Research and Development Program of China (2021YFC2701801); Shanghai Special Fund for Promoting High-Quality Industrial Development (2021-GZL-RGZN-01018)


Abstract:

Target-level implicit sentiment classification is a critical sentiment analysis task in natural language processing. Most existing studies focus mainly on modeling context-aware targets and rely on a single source of modeling information, making it difficult to adequately capture the implicit sentiment of the target in the text. To address this problem, this study proposes a target-level implicit sentiment classification method based on dual multiview representation learning that models the target and the text from three views. Specifically, the method designs representation learning from the text itself, from a graph view, and from an external knowledge view, and exploits a convolutional neural network to deeply integrate the representations of the three views. Moreover, it learns representations of the target from the same three views. Finally, the semantic representations of the text and the target are combined and fed into a sentiment polarity classifier. Experiments on five public datasets and comparisons with eight baseline models show that the proposed method achieves state-of-the-art performance. In particular, its F1m scores are 1.0% and 2.6% higher than those of the previous best models on the NewsMTSC-mt and NewsMTSC-rw implicit sentiment analysis datasets, respectively, and 3.6%, 1.4%, and 1.6% higher than those of the previous best models on the Laptop14, Restaurant14, and Twitter explicit sentiment analysis datasets, respectively.
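The architecture outlined above can be summarized in a short sketch. The following is a minimal, hypothetical PyTorch illustration of the dual multiview idea: three views (text, graph, external knowledge) are encoded for both the sentence and the target, fused by a convolution, and the two fused representations are concatenated for polarity classification. The encoder choices, dimensions, and class names are assumptions made for illustration only, not the authors' implementation.

# Hypothetical sketch of dual multiview representation fusion (not the paper's code).
import torch
import torch.nn as nn


class MultiViewEncoder(nn.Module):
    """Produces one representation per view (text, graph, external knowledge)."""

    def __init__(self, hidden: int = 128):
        super().__init__()
        # Placeholder per-view encoders; the paper's actual text, graph, and
        # knowledge encoders are replaced here by simple linear projections.
        self.text_view = nn.Linear(hidden, hidden)
        self.graph_view = nn.Linear(hidden, hidden)
        self.knowledge_view = nn.Linear(hidden, hidden)
        # 1-D convolution that fuses the three stacked view representations.
        self.fuse = nn.Conv1d(in_channels=3, out_channels=1, kernel_size=3, padding=1)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        views = torch.stack(
            [self.text_view(features), self.graph_view(features), self.knowledge_view(features)],
            dim=1,
        )  # (batch, 3 views, hidden)
        return self.fuse(views).squeeze(1)  # (batch, hidden)


class DualMultiViewClassifier(nn.Module):
    """Combines sentence-level and target-level multiview representations."""

    def __init__(self, hidden: int = 128, num_classes: int = 3):
        super().__init__()
        self.sentence_encoder = MultiViewEncoder(hidden)
        self.target_encoder = MultiViewEncoder(hidden)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, sentence_feats: torch.Tensor, target_feats: torch.Tensor) -> torch.Tensor:
        s = self.sentence_encoder(sentence_feats)
        t = self.target_encoder(target_feats)
        # Concatenate text and target semantics, then predict sentiment polarity.
        return self.classifier(torch.cat([s, t], dim=-1))


if __name__ == "__main__":
    model = DualMultiViewClassifier()
    logits = model(torch.randn(2, 128), torch.randn(2, 128))
    print(logits.shape)  # torch.Size([2, 3])

In this sketch the Conv1d fusion stands in for the CNN-based deep fusion of the three view representations described in the abstract; in a real implementation each placeholder projection would be replaced by the corresponding text, graph, or external-knowledge encoder.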

Key words: target-level implicit sentiment classification, natural language processing, sentiment analysis, dual multiview, representation learning