Research on Dual-adversarial MR Image Fusion Network Based on Feature Extraction of Pre-trained Model

Author:
Affiliation:
Author biography:
Corresponding author:
CLC number:

Fund Project:
National Natural Science Foundation of China (62072274, U1909210); Shandong Province Science and Technology Achievement Transfer and Transformation Project (2021LYXZ011); Key Research and Development Program of Zhejiang Province (2021C01108)

    Abstract:

    With the growing use of multi-modal medical images in clinical diagnosis and treatment, fusion technology based on spatial-temporal correlation has developed rapidly. A fused medical image not only retains the unique features of the source images of each modality but also strengthens their complementary information, making it easier for doctors to read the scans. At present, most methods perform feature extraction and feature fusion under manually defined constraints, which tends to cause loss of useful information and blurred details in the fused image. To address this, a dual-adversarial fusion network based on feature extraction with a pre-trained model is proposed in this paper to fuse MR-T1 and MR-T2 images. The network consists of a feature extraction module, a feature fusion module, and two discriminator modules. Because the available registered multi-modal medical image datasets are too small to train a feature extraction network sufficiently, and because pre-trained models have strong data representation ability, a pre-trained convolutional neural network model is embedded into the feature extraction module to generate feature maps. The feature fusion network then fuses the deep features and outputs the fused image. By accurately classifying the source images and the fused image, the two discriminator networks each establish an adversarial relation with the feature fusion network and ultimately drive it to learn the optimal fusion parameters. Experimental results demonstrate the effectiveness of the pre-training technique in the proposed method; compared with six existing typical fusion methods, the fusion results of the proposed method achieve the best performance in both visual effect and quantitative metrics.
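
    The abstract describes the architecture only at a high level. The following is a minimal sketch of how such a dual-adversarial fusion pipeline could be wired up, assuming PyTorch, an ImageNet-pretrained VGG-16 backbone as the frozen feature extractor, and simple convolutional stacks for the fusion network and the two discriminators. The backbone choice, layer sizes, and module names are illustrative assumptions, not the authors' actual implementation.

        import torch
        import torch.nn as nn
        from torchvision.models import vgg16

        class PretrainedExtractor(nn.Module):
            # Frozen pre-trained CNN mapping a single-channel MR slice to deep features.
            def __init__(self):
                super().__init__()
                # First convolutional block of an ImageNet-pretrained VGG-16 (assumption);
                # requires torchvision >= 0.13 for the `weights` argument.
                self.features = vgg16(weights="IMAGENET1K_V1").features[:4]
                for p in self.features.parameters():
                    p.requires_grad = False  # pre-trained weights stay fixed

            def forward(self, x):
                # MR slices are single-channel; replicate to 3 channels for the VGG input.
                return self.features(x.repeat(1, 3, 1, 1))

        class FusionNet(nn.Module):
            # Fuses the deep features of the two modalities into one fused image.
            def __init__(self, in_ch=64):
                super().__init__()
                self.fuse = nn.Sequential(
                    nn.Conv2d(in_ch * 2, 64, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
                )

            def forward(self, f_t1, f_t2):
                return self.fuse(torch.cat([f_t1, f_t2], dim=1))

        def make_discriminator():
            # Binary classifier distinguishing one source modality from the fused image.
            return nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
            )

        # Example forward pass on a pair of 256x256 MR-T1 / MR-T2 slices.
        extractor, fusion = PretrainedExtractor(), FusionNet()
        d_t1, d_t2 = make_discriminator(), make_discriminator()
        t1 = torch.rand(1, 1, 256, 256)
        t2 = torch.rand(1, 1, 256, 256)
        fused = fusion(extractor(t1), extractor(t2))    # same spatial size as the inputs
        logit_t1, logit_t2 = d_t1(fused), d_t2(fused)   # adversarial signals for the fusion network

    During training, the two discriminators would be optimized to separate the fused image from MR-T1 and MR-T2 respectively, while the fusion network is optimized to fool both of them, which is the dual-adversarial relation the abstract describes.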

Cite this article:

Liu Hui, Li Shanshan, Gao Shanshan, Deng Kai, Xu Gang, Zhang Caiming. Research on Dual-adversarial MR Image Fusion Network Based on Feature Extraction of Pre-trained Model. Journal of Software, 2023, (5).

History
  • Received: 2022-04-18
  • Last revised: 2022-05-29
  • Accepted:
  • Published online: 2022-09-20
  • Publication date: