
Computer Engineering (计算机工程) ›› 2021, Vol. 47 ›› Issue (5): 213-220, 228. doi: 10.19678/j.issn.1000-3428.0060053

• Graphics and Image Processing •

Face Image Inpainting Method Based on Multi-Scale Feature Fusion

BAI Zongwen1, YI Tingting1, ZHOU Meili1, WEI Wei2

  1. School of Physics and Electronic Information, Yan'an University, Yan'an, Shaanxi 716000, China;
    2. School of Computer Science and Engineering, Xi'an University of Technology, Xi'an 710048, China
  • Received: 2020-11-18  Revised: 2021-01-10  Published: 2020-12-17
  • About the authors: BAI Zongwen (1979-), male, Ph.D., associate professor; his main research area is deep learning. YI Tingting, M.S. candidate; ZHOU Meili, associate professor; WEI Wei, Ph.D., associate professor.
  • Funding:
    National Natural Science Foundation of China (61761042, 6194112); Natural Science Foundation of Shaanxi Province (2020JM-556); Scientific Research Program of Yan'an University (CXY201909, YDZ2019-05); Open Fund of the Shaanxi Key Laboratory of Intelligent Processing for Energy Big Data (IPBED10, IPBED7).


Abstract: When restoring images with large corrupted regions, traditional image inpainting methods tend to produce overly smooth or blurry results and have difficulty reconstructing a plausible face structure. To address this problem, a multi-scale feature fusion method is introduced into the discriminator of a conventional Generative Adversarial Network (GAN): feature maps from different depths are upsampled and added directly, so that shallow and deep information are combined effectively. High-level features are used to capture the overall structure of the image, while low-level features fill in the detailed texture of the face, allowing the resolution and semantic features of an image to be fused for effective face image inpainting. Experimental results on the CelebA dataset show that the proposed method outperforms the region normalization method in terms of Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), and L1 loss, and achieves good visual quality.
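To make the fusion step concrete, the following is a minimal PyTorch-style sketch of a discriminator that upsamples feature maps from several depths to a common resolution and adds them element-wise, as the abstract describes. The number of stages, channel widths, and the patch-score head are illustrative assumptions, not the exact architecture reported in the paper.

```python
# Minimal sketch (not the authors' released code): a GAN discriminator whose
# intermediate feature maps at several depths are upsampled to a common size
# and added element-wise, fusing shallow (texture) and deep (structure) cues.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFusionDiscriminator(nn.Module):
    def __init__(self, in_channels=3, base_channels=64):
        super().__init__()
        # Three downsampling stages give features at 1/2, 1/4 and 1/8 resolution.
        self.stage1 = nn.Sequential(
            nn.Conv2d(in_channels, base_channels, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True))
        self.stage2 = nn.Sequential(
            nn.Conv2d(base_channels, base_channels, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True))
        self.stage3 = nn.Sequential(
            nn.Conv2d(base_channels, base_channels, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True))
        # 1x1 convolution maps the fused feature map to patch-level real/fake scores.
        self.head = nn.Conv2d(base_channels, 1, kernel_size=1)

    def forward(self, x):
        f1 = self.stage1(x)   # shallow features: fine texture
        f2 = self.stage2(f1)  # intermediate features
        f3 = self.stage3(f2)  # deep features: global structure
        size = f1.shape[2:]
        # Upsample the deeper maps to the shallow map's resolution and add directly.
        fused = (f1
                 + F.interpolate(f2, size=size, mode="bilinear", align_corners=False)
                 + F.interpolate(f3, size=size, mode="bilinear", align_corners=False))
        return self.head(fused)  # per-patch realness scores

if __name__ == "__main__":
    d = MultiScaleFusionDiscriminator()
    scores = d(torch.randn(1, 3, 128, 128))
    print(scores.shape)  # torch.Size([1, 1, 64, 64])
```

Summing the rescaled maps (rather than concatenating them) keeps the channel count fixed, which is one simple way to let texture cues from shallow layers and structural cues from deep layers contribute to the same real/fake decision.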

Key words: image inpainting, Generative Adversarial Network (GAN), multi-scale feature fusion, discriminator, high-level feature, low-level feature


CLC number: