
Computer Engineering ›› 2023, Vol. 49 ›› Issue (5): 269-276. doi: 10.19678/j.issn.1000-3428.0066497

• Development Research and Engineering Application •

Dual-Modality Biometric Feature Recognition Method Based on Transform Matching Layer Fusion

SUN Yan, HU Long, FENG Xueling

  1. School of Computer Engineering and Science, Shanghai University, Shanghai 200444, China
  • Received: 2022-12-12  Revised: 2023-02-02  Published: 2023-03-10
  • About the authors: SUN Yan (born 1989), female, lecturer, Ph.D.; her main research interests are computer vision and video motion analysis. HU Long and FENG Xueling are master's students.
  • Funding:
    Shanghai Pujiang Talent Program (20PJ1404400).




Abstract: Since the outbreak of COVID-19, wearing a mask has become a common practice for disease prevention. Obscured facial features, as well as the distance from the subject, can significantly reduce the accuracy of face recognition methods. Gait, a biometric feature that can be captured at long range and is difficult to disguise, is in turn easily affected by changes in external conditions such as body occlusion and viewing angle. This paper therefore proposes a recognition method based on a transform matching layer that fuses gait and facial features. A gait feature extraction network extracts discriminative spatiotemporal biometric features from human silhouette images, overcoming the difficulty that single-modality face recognition has in identifying masked targets at long range, while a facial feature extraction network extracts fine-grained facial features, enhancing robustness when the contour of the target body is occluded. The facial and gait features are normalized in the matching layer and then fused so that the two modalities complement each other. In addition, an interrelated global-local spatiotemporal feature extraction module is constructed. The local feature extraction module extracts fine-grained gait features and uses a multi-scale random band segmentation strategy based on complementary masks to strengthen the associations among local features. The global feature extraction module extracts global gait information that complements the local fine-grained information, thereby improving the robustness of the gait feature extraction network to occlusion and viewing-angle changes. Experimental results show that the recognition accuracy of the proposed method reaches 99.16%, which is 6.56 and 0.45 percentage points higher than that of the gait and facial feature extraction networks, respectively. Furthermore, in a real long-distance, masked scenario, the recognition accuracy reaches 94.52%, an improvement of 1.92 and 5.98 percentage points, respectively.
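The matching-layer fusion described in the abstract (normalize the face and gait features, then fuse the two modalities into a single representation used for matching) can be illustrated with a short PyTorch-style sketch. This is only a minimal reading of that step under stated assumptions: the module names, embedding dimensions, projection layers, and the learned weighting below are illustrative, not the authors' implementation of the transform matching layer.

```python
# Minimal sketch (not the paper's code) of matching-layer fusion of face and
# gait embeddings: each modality is projected, L2-normalized, and combined with
# a learned weight before cosine matching. Dimensions and the weighted-sum rule
# are assumptions made for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MatchingLayerFusion(nn.Module):
    def __init__(self, face_dim: int = 512, gait_dim: int = 256, fused_dim: int = 256):
        super().__init__()
        # Project both modalities into a shared embedding space.
        self.face_proj = nn.Linear(face_dim, fused_dim)
        self.gait_proj = nn.Linear(gait_dim, fused_dim)
        # Learnable balance between the two modalities.
        self.alpha = nn.Parameter(torch.tensor(0.0))

    def forward(self, face_feat: torch.Tensor, gait_feat: torch.Tensor) -> torch.Tensor:
        # Normalize each modality so neither dominates the fused representation.
        face = F.normalize(self.face_proj(face_feat), dim=-1)
        gait = F.normalize(self.gait_proj(gait_feat), dim=-1)
        a = torch.sigmoid(self.alpha)
        fused = a * face + (1.0 - a) * gait
        return F.normalize(fused, dim=-1)


def match_scores(probe: torch.Tensor, gallery: torch.Tensor) -> torch.Tensor:
    # Cosine similarity between fused probe and gallery embeddings.
    return probe @ gallery.t()


if __name__ == "__main__":
    fusion = MatchingLayerFusion()
    probe = fusion(torch.randn(4, 512), torch.randn(4, 256))      # 4 probe subjects
    gallery = fusion(torch.randn(10, 512), torch.randn(10, 256))  # 10 enrolled subjects
    print(match_scores(probe, gallery).shape)  # torch.Size([4, 10])
```

A score-level variant would instead normalize and combine per-modality similarity scores; the sketch fuses at the feature level, which is one plausible way to realize the complementary fusion of normalized face and gait information described above.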

Key words: biometric feature, dual-modality, feature fusion, gait recognition, facial recognition

CLC Number: