
Computer Engineering, 2022, Vol. 48, Issue (12): 248-254. doi: 10.19678/j.issn.1000-3428.0062860

• Development Research and Engineering Application •

Polyp Segmentation Network GLIA-Net Based on Deep Learning

REN Lili1, BIAN Xuan2, WANG Guanglei2, WANG Hongrui2   

  1. Department of Internal Medicine-Oncology, Affiliated Hospital of Hebei University, Baoding, Hebei 071002, China;
    2. College of Electronic Information Engineering, Hebei University, Baoding, Hebei 071002, China
  • Received: 2021-10-03  Revised: 2022-01-28  Published: 2022-01-25

  • About the authors: REN Lili (born 1977), female, Ph.D., whose main research interests include the diagnosis, treatment, and mechanisms of malignant tumors, as well as computer vision; BIAN Xuan, master's student; WANG Guanglei, associate professor, Ph.D.; WANG Hongrui, professor, Ph.D.
  • Supported by:
    National Natural Science Foundation of China (61473112); Key Program of the Natural Science Foundation of Hebei Province (F2017201222).

Abstract: With the development of convolutional neural networks, existing improved U-Net networks for polyp segmentation can effectively raise segmentation accuracy; however, they introduce a large number of parameters, which increases model complexity and reduces computational efficiency. A low-complexity, high-performance network, GLIA-Net, is proposed to segment polyp regions in endoscopic images. Using U-Net as the base architecture, a global and local interactive attention fusion module is added after the double-convolution block. The global attention is based on two learnable external memories and is implemented with cascaded linear and normalization layers. The local attention is based on a local cross-channel interaction strategy that replaces the fully connected layer with a one-dimensional convolution, reducing computational complexity and accelerating network computation while maintaining performance. By combining the advantages of Efficient Channel Attention (ECA) and External Attention (EA), local and global attention are fused without introducing many additional parameters or computations, and attention mechanisms are applied in both the channel and spatial dimensions to extract rich multi-scale semantic information. Experimental results on the Kvasir dataset show that the Intersection over Union (IoU), Dice coefficient, and Volume Overlap Error (VOE) of GLIA-Net are 69.4%, 80.7%, and 5.0%, respectively. Compared with ExfuseNet, SegNet, ResUNet, and other networks, GLIA-Net achieves higher segmentation accuracy while maintaining computational efficiency.
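The abstract only outlines the global and local interactive attention fusion module, so the following PyTorch sketch is a hedged illustration of how such a block could be assembled: global attention through two learnable external memory units realized as cascaded linear layers with normalization (as in External Attention), and local attention through an ECA-style one-dimensional convolution across channels. The class names, the memory size of 64, and the element-wise additive fusion are illustrative assumptions, not the authors' released implementation.

# Minimal sketch of a global-local interactive attention fusion block,
# assuming EA-style global attention and ECA-style local attention.
import torch
import torch.nn as nn


class ExternalAttention2d(nn.Module):
    """Global attention with two learnable external memories M_k and M_v."""
    def __init__(self, channels: int, memory_size: int = 64):
        super().__init__()
        self.mk = nn.Linear(channels, memory_size, bias=False)   # memory unit M_k
        self.mv = nn.Linear(memory_size, channels, bias=False)   # memory unit M_v
        self.softmax = nn.Softmax(dim=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        feats = x.flatten(2).transpose(1, 2)                  # (B, H*W, C)
        attn = self.softmax(self.mk(feats))                   # attention over pixels
        attn = attn / (attn.sum(dim=2, keepdim=True) + 1e-9)  # double normalization
        out = self.mv(attn)                                    # (B, H*W, C)
        return out.transpose(1, 2).reshape(b, c, h, w)


class ECA(nn.Module):
    """Local cross-channel attention: a 1-D convolution replaces the fully connected layer."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.pool(x).squeeze(-1).transpose(1, 2)          # (B, 1, C)
        w = self.sigmoid(self.conv(w)).transpose(1, 2).unsqueeze(-1)  # (B, C, 1, 1)
        return x * w                                          # channel-wise reweighting


class GlobalLocalAttentionFusion(nn.Module):
    """Fuses global (EA) and local (ECA) attention after a double-convolution block."""
    def __init__(self, channels: int):
        super().__init__()
        self.global_attn = ExternalAttention2d(channels)
        self.local_attn = ECA()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.local_attn(x) + self.global_attn(x)       # assumed additive fusion


if __name__ == "__main__":
    block = GlobalLocalAttentionFusion(channels=64)
    y = block(torch.randn(2, 64, 128, 128))
    print(y.shape)  # torch.Size([2, 64, 128, 128])

In this reading, the 1-D convolution keeps the local branch nearly parameter-free, while the two small memory matrices keep the global branch linear in the number of pixels, which is consistent with the abstract's claim of adding attention without introducing many parameters or computations.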

Key words: convolutional neural network, U-Net network, polyp segmentation, endoscopic images, interactive attention fusion module


