Graphics and Image Processing
Yuting WANG, Zhiming LIU, Yaping WAN, Tao ZHU
Image fusion is the process of combining multiple input images into a single unified image. Although visible-infrared image fusion enhances target detection accuracy, its performance often degrades in low-light scenarios. This study introduces a novel fusion model, DAPR-Net, which features an encoder-decoder structure with cross-layer residual connections. These connections link each encoder layer's output to the corresponding decoder layer, thereby reinforcing the information flow between the convolutional layers. Within the encoder, a Dual Attention Feature Extraction Module (AFEM) is designed to better distinguish the differences between the fused image and the input visible and infrared images while retaining crucial information from both. Experimental results show that, compared with the benchmark PIAFusion model, the information entropy, spatial frequency, mean gradient, standard deviation, and visual information fidelity indices of the model increase by 0.849, 3.252, 7.634, 10.38, and 0.293 on the LLVIP dataset and by 2.105, 2.23, 4.099, 27.938, and 0.343 on the MSRS dataset, respectively. With the YOLOv5 target detection network, the mean average precision, recall, precision, and F1 score on the LLVIP and MSRS datasets increase by 8.8, 1.4, 1.9, and 1.5 percentage points and by 7.5, 1.4, 8.8, and 1.2 percentage points, respectively, showing significant advantages over other fusion methods.
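To make the architectural ideas in the abstract concrete, the following is a minimal PyTorch sketch of an encoder-decoder fusion network with a cross-layer residual connection and a dual (channel + spatial) attention block. The abstract does not specify implementation details, so the layer sizes, the CBAM-style attention design, and all module names (DualAttention, FusionNet) are illustrative assumptions, not the paper's actual DAPR-Net implementation.

```python
# Illustrative sketch only: channel widths, attention form, and the
# two-level depth are assumptions; DAPR-Net's real design may differ.
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    """Channel attention followed by spatial attention (assumed AFEM form)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_mlp(x)                       # channel attention
        avg = x.mean(dim=1, keepdim=True)                 # per-pixel statistics
        mx, _ = x.max(dim=1, keepdim=True)
        return x * self.spatial(torch.cat([avg, mx], 1))  # spatial attention

class FusionNet(nn.Module):
    def __init__(self, ch=(2, 32, 64)):  # 2-channel input: visible Y + infrared
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(ch[0], ch[1], 3, padding=1),
                                  nn.ReLU(inplace=True), DualAttention(ch[1]))
        self.enc2 = nn.Sequential(nn.Conv2d(ch[1], ch[2], 3, padding=1),
                                  nn.ReLU(inplace=True), DualAttention(ch[2]))
        self.dec2 = nn.Sequential(nn.Conv2d(ch[2], ch[1], 3, padding=1),
                                  nn.ReLU(inplace=True))
        self.dec1 = nn.Conv2d(ch[1], 1, 3, padding=1)     # single fused channel

    def forward(self, vis, ir):
        e1 = self.enc1(torch.cat([vis, ir], dim=1))
        e2 = self.enc2(e1)
        d2 = self.dec2(e2) + e1   # cross-layer residual: encoder -> decoder
        return torch.sigmoid(self.dec1(d2))

fused = FusionNet()(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
print(fused.shape)  # torch.Size([1, 1, 64, 64])
```

The additive skip (`self.dec2(e2) + e1`) stands in for the cross-layer residual connections the abstract describes: each decoder stage receives the matching encoder feature map directly, so fine detail from the inputs does not have to survive the full bottleneck.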