
Computer Engineering

   

Multi-branch Clothes-changing Person Re-identification with feature fusion and channel attention

  

  • Published: 2024-04-16


Abstract: Clothes-changing person re-identification (CC Re-ID) is an emerging topic in person re-identification that aims to retrieve pedestrians after they have changed their clothes; it has not yet been fully studied. Existing methods mainly rely on multi-modal data to assist disentangled representation learning, for example decoupling a pedestrian's intrinsic attributes from clothing through auxiliary data such as face, gait, and body contour, but these methods generalize poorly and require substantial extra work to obtain the auxiliary information. Methods that use only the original data, on the other hand, do not extract the relevant information sufficiently and perform weakly. To address these problems, a new Multi-Branch CC Re-ID method combining Feature fusion and Channel attention (MBFC) is proposed. The method integrates a channel attention mechanism into the backbone network to learn key information at the feature-channel level, and designs local and global feature fusion methods to improve the network's ability to extract fine-grained pedestrian features. In addition, the model adopts a multi-branch structure and uses multiple loss functions, such as a clothing adversarial loss and a label-smoothed cross-entropy loss, to guide the model toward clothing-irrelevant information, reducing the influence of clothing and thus extracting more effective pedestrian cues. The proposed model is evaluated extensively on the PRCC and VC-Clothes datasets. The experimental results show that it outperforms most state-of-the-art CC Re-ID methods in Rank-1 accuracy and mAP.
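The abstract does not specify which channel attention design is embedded in the backbone; a minimal squeeze-and-excitation-style sketch in NumPy (an illustrative assumption, not the paper's exact module) shows the general idea of reweighting feature channels:

```python
import numpy as np

def channel_attention(x, reduction=4, w1=None, w2=None):
    """Reweight the channels of a (C, H, W) feature map.

    Squeeze: global average pooling over the spatial dimensions.
    Excitation: a two-layer bottleneck (ReLU, then sigmoid) produces a
    per-channel weight in (0, 1) that rescales the input channels.
    w1/w2 stand in for learned weights; random values are used here.
    """
    c = x.shape[0]
    if w1 is None or w2 is None:
        rng = np.random.default_rng(0)
        w1 = rng.standard_normal((c, c // reduction)) * 0.1
        w2 = rng.standard_normal((c // reduction, c)) * 0.1
    s = x.mean(axis=(1, 2))                  # squeeze -> (C,)
    z = np.maximum(s @ w1, 0.0)              # bottleneck FC + ReLU
    a = 1.0 / (1.0 + np.exp(-(z @ w2)))      # FC + sigmoid -> channel weights
    return x * a[:, None, None]              # scale each channel
```

In a real model the two weight matrices would be trained jointly with the backbone, so informative channels receive weights near 1 and clothing-dominated channels can be suppressed.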

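Among the losses the abstract names, the label-smoothed cross-entropy has a standard form: the one-hot target is softened so the true class gets probability 1 − ε and the remaining ε is spread over the other classes. A single-sample NumPy sketch (the paper's exact formulation and ε are not given) is:

```python
import numpy as np

def smooth_ce(logits, target, num_classes, eps=0.1):
    """Cross-entropy with label smoothing for one sample.

    The true class receives weight 1 - eps; the remaining eps is
    distributed uniformly over the other num_classes - 1 classes.
    """
    # numerically stable log-softmax
    shifted = logits - logits.max()
    log_p = shifted - np.log(np.exp(shifted).sum())
    # smoothed target distribution
    q = np.full(num_classes, eps / (num_classes - 1))
    q[target] = 1.0 - eps
    return -(q * log_p).sum()
```

With eps = 0 this reduces to the ordinary cross-entropy; a small positive ε discourages overconfident identity predictions, which is a common regularizer in Re-ID training.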