
计算机工程 ›› 2023, Vol. 49 ›› Issue (9): 234-245. doi: 10.19678/j.issn.1000-3428.0065678

• 图形图像处理 •

基于深度学习的CT-MR图像联合配准分割方法

洪犇1,2, 钱旭升2, 申明磊1, 胡冀苏2, 耿辰2, 戴亚康2, 周志勇2,*   

  1. 南京理工大学 电子工程与光电技术学院, 南京 210094
    2. 中国科学院 苏州生物医学工程技术研究所, 江苏 苏州 215163
  • 收稿日期:2022-09-05 出版日期:2023-09-15 发布日期:2023-09-14
  • 通讯作者: 周志勇
  • 作者简介:

    洪犇(1997—),男,硕士,主研方向为图像处理

    钱旭升,硕士

    申明磊,副研究员、博士

    胡冀苏,博士

    耿辰,博士

    戴亚康,研究员、博士

  • 基金资助:
    中国科学院青年创新促进会项目(2021324); 江苏省重点研发计划项目(BE2022049-2); 江苏省重点研发计划项目(BE2021053); 江苏省重点研发计划项目(BE2020625); 江苏省重点研发计划项目(BE2021612); 江苏省卫生健康委医学科研项目(M2020068); 苏州市医疗卫生科技创新项目(SKY2021031); 苏州市科技计划项目(SS202054); 丽水市科技计划项目(2020ZDYF09)

Joint Registration and Segmentation Method of CT-MR Images Based on Deep Learning

Ben HONG1,2, Xusheng QIAN2, Minglei SHEN1, Jisu HU2, Chen GENG2, Yakang DAI2, Zhiyong ZHOU2,*   

  1. School of Electronic Engineering and Optoelectronic Technology, Nanjing University of Science and Technology, Nanjing 210094, China
    2. Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, Jiangsu, China
  • Received:2022-09-05 Online:2023-09-15 Published:2023-09-14
  • Contact: Zhiyong ZHOU

摘要:

医学图像配准和分割是医学图像分析中的两项重要任务,将其相结合可以有效提升两者的精度,但现有的单模态图像联合配准分割框架难以适用于多模态图像。针对以上问题,提出基于模态一致性监督和多尺度邻域描述符的CT-MR图像联合配准分割框架,包含一个多模态图像配准网络和两个分割网络。联合配准分割框架利用多模态图像配准产生的形变场在两种模态的分割结果之间建立对应的形变关系,并设计模态一致性监督损失,通过两个分割网络互相监督的方式提升多模态分割的精度。在多模态图像配准网络中,构建多尺度模态独立邻域描述符以增强跨模态信息表征能力,并将该描述符作为结构性损失项加入配准网络,更加准确地约束多模态图像的局部结构对应关系。在118例肝脏CT-MR多模态图像数据集上的实验结果表明,在仅提供30%分割标签的情况下,该方法的肝脏配准Dice相似系数(DSC)达到94.66(±0.84)%,目标配准误差达到5.191(±1.342) mm,CT和MR图像的肝脏分割DSC达到94.68(±0.82)%和94.12(±1.06)%,优于对比的配准方法和分割方法。

关键词: 多模态图像, 配准, 分割, 模态一致性监督, 多尺度邻域描述符

Abstract:

Medical image registration and segmentation are two important tasks in medical image analysis, and combining them can effectively improve the accuracy of both. However, existing joint registration and segmentation frameworks for single-modal images are difficult to apply to multi-modal images. To address this problem, a joint registration and segmentation framework for Computed Tomography-Magnetic Resonance (CT-MR) images is proposed, based on modality-consistency supervision and a multi-scale modality-independent neighborhood descriptor. It consists of a multi-modal image registration network and two segmentation networks. The deformation field generated by the multi-modal registration is used to establish a corresponding deformation relationship between the segmentation results of the two modalities, and a modality-consistency supervision loss is designed so that the two segmentation networks supervise each other, improving the accuracy of multi-modal segmentation. In the multi-modal image registration network, a multi-scale modality-independent neighborhood descriptor is constructed to enhance the representation of cross-modal information. The descriptor is added to the registration network as a structural loss term to constrain the local structure correspondence of multi-modal images more accurately. Experiments were performed on a dataset of 118 CT-MR multi-modal liver images. With only 30% of the segmentation labels provided, the Dice Similarity Coefficient (DSC) of liver registration reaches 94.66(±0.84)% and the Target Registration Error (TRE) reaches 5.191(±1.342) mm, while the liver segmentation DSC reaches 94.68(±0.82)% on CT images and 94.12(±1.06)% on MR images. These results are superior to those of the compared registration and segmentation methods.
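The modality-consistency supervision described above hinges on one operation: warping one modality's segmentation through the registration deformation field so it can be compared against the other modality's segmentation. The following is a minimal 2D NumPy sketch of that idea, not the authors' implementation: the paper operates on 3D volumes with trained networks and differentiable (trilinear) warping, whereas here `warp_nearest`, `dice_consistency_loss`, and the toy displacement field are illustrative names and shapes chosen for brevity.

```python
import numpy as np

def warp_nearest(seg, field):
    """Warp a 2D segmentation map with a dense displacement field.

    Nearest-neighbour resampling: output[y, x] = seg[y + field[y, x, 0],
    x + field[y, x, 1]], with source coordinates clipped to the image.
    """
    h, w = seg.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.rint(ys + field[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs + field[..., 1]).astype(int), 0, w - 1)
    return seg[src_y, src_x]

def dice_consistency_loss(seg_a, seg_b, eps=1e-6):
    """1 - Dice overlap between two (soft) segmentation maps."""
    inter = 2.0 * np.sum(seg_a * seg_b)
    denom = np.sum(seg_a) + np.sum(seg_b) + eps
    return 1.0 - inter / denom

# Toy example: the MR mask is the CT mask shifted by (+1, +1); a constant
# displacement field of (+1, +1) undoes the shift, so the warped MR mask
# lines up with the CT mask and the consistency loss is near zero.
seg_ct = np.zeros((8, 8)); seg_ct[2:5, 2:5] = 1.0
seg_mr = np.zeros((8, 8)); seg_mr[3:6, 3:6] = 1.0
field = np.ones((8, 8, 2))          # displacement (+1, +1) everywhere
warped_mr = warp_nearest(seg_mr, field)
loss = dice_consistency_loss(seg_ct, warped_mr)
```

During training, this loss would flow to both segmentation networks (each supervising the other through the deformation field), which is how the framework exploits unlabeled image pairs when only 30% of segmentation labels are available.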

Key words: multi-modal images, registration, segmentation, modality consistency supervision, multi-scale neighborhood descriptor
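The structural loss term in the registration network builds on a multi-scale modality-independent neighborhood descriptor. As a rough intuition, such a descriptor encodes each voxel by patch-wise self-similarity to its neighbours, normalised by local variance, so that CT and MR images of the same anatomy map to similar descriptors despite very different intensities. Below is a simplified 2D NumPy sketch under stated assumptions: a 4-neighbour search region, box-filter patch aggregation, `np.roll` shifts (which wrap at borders), and multi-scale realised by varying the patch radius. All function names are hypothetical and this is not the descriptor's exact formulation from the paper.

```python
import numpy as np

def box_mean(img, radius):
    """Patch aggregation: mean over a (2r+1)x(2r+1) window via shifted sums."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros((h, w), dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += pad[radius + dy: radius + dy + h, radius + dx: radius + dx + w]
    return out / (2 * radius + 1) ** 2

def mind_descriptor(img, radius=1):
    """Simplified modality-independent neighbourhood descriptor.

    Patch SSD to the 4 axial neighbours, normalised by a local variance
    estimate, exponentiated, then normalised per pixel. Invariant to
    linear intensity scaling, which is the key cross-modal property.
    """
    shifts = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    ssd = np.stack([
        box_mean((img - np.roll(np.roll(img, dy, axis=0), dx, axis=1)) ** 2, radius)
        for dy, dx in shifts
    ])                                         # (4, H, W)
    var = ssd.mean(axis=0) + 1e-6              # local variance estimate
    desc = np.exp(-ssd / var)
    return desc / desc.max(axis=0, keepdims=True)

def multiscale_mind_loss(fixed, warped_moving, radii=(1, 2)):
    """Mean absolute descriptor difference, averaged over patch scales."""
    return sum(
        np.mean(np.abs(mind_descriptor(fixed, r) - mind_descriptor(warped_moving, r)))
        for r in radii
    ) / len(radii)

rng = np.random.default_rng(0)
img = rng.random((16, 16))
loss_scaled = multiscale_mind_loss(img, 2.0 * img)   # "modality change": near zero
loss_other = multiscale_mind_loss(img, rng.random((16, 16)))
```

The variance normalisation cancels any global intensity rescaling, so `loss_scaled` stays near zero while structurally different images yield a clearly larger loss; used as a loss term on the fixed image and the warped moving image, this penalises local structural mismatch without assuming any intensity correspondence between CT and MR.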