
Computer Engineering ›› 2022, Vol. 48 ›› Issue (7): 234-240. doi: 10.19678/j.issn.1000-3428.0061890

• Graphics and Image Processing •

Infrared and Visible Image Fusion Method Based on Attention and Residual Concatenation

LI Chen, HOU Jin, LI Jinbiao, CHEN Zirui   

  1. School of Information Science and Technology, Southwest Jiaotong University, Chengdu 611756, China
  • Received: 2021-06-09  Revised: 2021-08-25  Online: 2022-07-15  Published: 2021-08-27
  • About the authors: LI Chen (born 1995), male, M.S. candidate; his main research interests are image fusion and deep learning. HOU Jin (corresponding author), associate professor, Ph.D. LI Jinbiao and CHEN Zirui, M.S. candidates.
  • Funding: Sichuan Science and Technology Program (2020SYSY0016).

Abstract: Infrared and visible image fusion is an effective means of obtaining high-quality target images in complex environments and is widely used in target detection, face recognition, and other fields. Traditional infrared and visible image fusion methods do not make full use of the key information in the source images, resulting in fused images with poor visual quality and lost background details. To address this problem, an end-to-end fusion method based on attention and residual concatenation is proposed. The source images are fed into the generator, where a hierarchical feature extraction block extracts their hierarchical features and a decoder with U-net connections fuses these features to generate the initial fused image. The generator is trained adversarially against a discriminator that takes the pre-fused image as input, and a detail loss function is used to optimize the generator and supplement the information missing from the fused image. In addition, spectral normalization is applied in the discriminator to improve the training stability of the Generative Adversarial Network (GAN). The experimental results show that the information entropy, standard deviation, mutual information, and spatial frequency of this method are 7.118 2, 46.629 2, 14.236 3, and 20.321, respectively. Compared with fusion methods such as FusionGAN, LP, and STDFusionNet, the proposed method extracts the information of the source images more fully, and the resulting images have better visual quality.
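The four objective metrics reported above (information entropy, standard deviation, mutual information, and spatial frequency) are widely used measures of fusion quality. The following NumPy sketch shows how these metrics are conventionally computed on 8-bit grayscale images; it is an illustration of the standard formulas only, not the authors' evaluation code, and all function names are ours.

    # Minimal sketch of the standard fusion-quality metrics (illustration only,
    # not the authors' code); images are assumed to be 8-bit grayscale arrays.
    import numpy as np

    def entropy(img, bins=256):
        # Information entropy (EN) of the gray-level histogram, in bits.
        hist, _ = np.histogram(img, bins=bins, range=(0, 255))
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    def std_dev(img):
        # Standard deviation (SD) of the pixel intensities.
        return float(np.std(img.astype(np.float64)))

    def mutual_information(fused, source, bins=256):
        # Mutual information MI(F, S) estimated from the joint gray-level histogram.
        joint, _, _ = np.histogram2d(fused.ravel(), source.ravel(),
                                     bins=bins, range=[[0, 255], [0, 255]])
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)   # marginal of the fused image
        py = pxy.sum(axis=0, keepdims=True)   # marginal of the source image
        nz = pxy > 0                          # avoid log(0)
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

    def spatial_frequency(img):
        # Spatial frequency SF = sqrt(RF^2 + CF^2) from row and column differences.
        img = img.astype(np.float64)
        rf = np.sqrt(np.mean((img[:, 1:] - img[:, :-1]) ** 2))
        cf = np.sqrt(np.mean((img[1:, :] - img[:-1, :]) ** 2))
        return float(np.sqrt(rf ** 2 + cf ** 2))

    # Typical usage: the reported MI is the sum over both source images,
    # e.g. mutual_information(fused, ir) + mutual_information(fused, vis).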

Key words: image fusion, feature extraction, infrared image, attention, Generative Adversarial Network (GAN)

CLC Number: