
Computer Engineering, 2026, 52(4): 82-89. doi: 10.19678/j.issn.1000-3428.0070178

• Computational Intelligence and Pattern Recognition •

Emotion Recognition Based on Adaptive Fusion of Multiple Gait Features

FU Bichao, SHENG Jie, WANG Lei*

  1. School of Information Science and Technology, University of Science and Technology of China, Hefei 230026, Anhui, China
  • Received: 2024-07-25; Revised: 2024-09-20; Online: 2026-04-15; Published: 2024-12-05
  • Contact: WANG Lei

  • About the authors:

    FU Bichao, male, M.S. candidate; his main research interests are emotion recognition and causal inference.

    SHENG Jie, lecturer, Ph.D.

    WANG Lei (corresponding author), associate professor, Ph.D.

  • Supported by:
    High-Tech Innovation Special Zone Project (20-163-14-LZ-001-004-01)

Abstract:

Most existing gait-based emotion recognition methods have not studied feature fusion in sufficient depth; they therefore fail to fully exploit the various features of gait, resulting in poor performance. In this study, an emotion recognition method based on the adaptive fusion of multiple gait features is developed. First, spatiotemporal features, reconstructed features, and psychology-based affective features are extracted from gait data: spatiotemporal features capture the dynamic changes in gait patterns, reconstructed features focus on the structural information of gait, and psychology-based affective features provide insight into an individual's emotional state. Subsequently, an adaptive fusion strategy dynamically weighs the importance of the three types of gait features, yielding a more comprehensive representation of the emotional state. Finally, ten-fold cross-validation is performed on a dataset containing four emotion categories, and the model is trained and tested on the real-world Emotion-Gait dataset. Experimental results show that, in the multi-label classification task, the proposed model improves the mean Average Precision (mAP) by 2 percentage points over the state-of-the-art TAEW method, and in the multiclass classification task, its accuracy is 1.88 percentage points higher than that of the STEP method. These results indicate that the proposed method effectively leverages the spatiotemporal, reconstructed, and psychology-based affective features of pedestrian gait, providing a robust and accurate approach to emotion recognition.
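To make the adaptive fusion step concrete, the following minimal PyTorch sketch gates three per-sample feature vectors with a learned softmax weighting and classifies the fused result. The feature dimension (128), the linear gating network, and all identifiers are illustrative assumptions; the paper's actual architecture is not reproduced here.

# Minimal sketch (PyTorch) of adaptively fusing three gait feature
# branches. Dimensions, names, and the linear-softmax gate are
# illustrative assumptions, not the authors' published implementation.
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Dynamically weighs spatiotemporal, reconstructed, and
    psychology-based affective features, then classifies emotion."""
    def __init__(self, dim=128, num_classes=4):
        super().__init__()
        self.gate = nn.Linear(3 * dim, 3)             # scores the 3 branches
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, f_st, f_rec, f_aff):
        # Each input: (batch, dim) features from one branch, e.g. a
        # spatiotemporal graph network, an autoencoder, and
        # psychology-based affective cues.
        stacked = torch.stack([f_st, f_rec, f_aff], dim=1)     # (B, 3, D)
        weights = torch.softmax(self.gate(stacked.flatten(1)), dim=-1)
        fused = (weights.unsqueeze(-1) * stacked).sum(dim=1)   # (B, D)
        return self.classifier(fused)

# Usage with random stand-ins for the three 128-D feature vectors:
model = AdaptiveFusion()
logits = model(torch.randn(8, 128), torch.randn(8, 128), torch.randn(8, 128))
print(logits.shape)  # torch.Size([8, 4]): one score per emotion class

The softmax keeps the three branch weights non-negative and summing to one for each sample, so the network can shift emphasis between branches from one walker to the next; this is one plausible reading of "adaptive fusion", not the paper's verified design.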

Key words: emotion recognition, gait features, spatiotemporal graph Convolutional Neural Network (CNN), autoencoder, adaptive fusion
