Abstract: With the popularization of multi-modal medical images in clinical diagnosis and treatment, fusion technology based on spatial-temporal correlation characteristics has developed rapidly. Fused medical images not only retain the unique features of source images of various modalities but also strengthen the complementary information, helping doctors acquire more comprehensive information. At present, most methods perform feature extraction and feature fusion with manually defined constraints, which easily causes loss of useful information and unclear details in the fused image. In light of this, this paper proposes a dual-adversarial MR image fusion network whose feature extraction is based on a pre-trained model. The network consists of a feature extraction module, a feature fusion module, and two discriminator network modules. Because registered multi-modal medical image datasets are small, the feature extraction network cannot be fully trained from scratch; since pre-trained models have powerful data representation ability, we embed a pre-trained convolutional neural network into the feature extraction module to generate feature maps. The feature fusion network fuses the deep features and outputs the fused image. By accurately classifying source and fused images, the two discriminator networks each establish an adversarial relationship with the feature fusion network, ultimately encouraging it to learn the optimal fusion parameters. The experimental results illustrate the effectiveness of pre-training in this method, and compared with six existing typical fusion methods, the fused results generated by our method achieve the best performance in both visual effect and quantitative metrics.
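
To make the described pipeline concrete, the following is a minimal sketch of the dual-adversarial architecture, assuming a PyTorch implementation. The frozen VGG16 backbone standing in for the pre-trained model, all layer sizes, and all module names are illustrative assumptions, not the authors' released code.

```python
# Sketch of the dual-adversarial fusion setup: a frozen pre-trained CNN as
# feature extractor, a small convolutional fusion network, and two
# discriminators (one per source modality). Illustrative only.
import torch
import torch.nn as nn
from torchvision.models import vgg16


class FeatureExtractor(nn.Module):
    """Frozen pre-trained CNN used to generate deep feature maps."""

    def __init__(self):
        super().__init__()
        # Early convolutional stages of VGG16 (assumed backbone); weights stay fixed.
        self.features = vgg16(weights="IMAGENET1K_V1").features[:9]
        for p in self.features.parameters():
            p.requires_grad = False

    def forward(self, x):
        # Replicate a single-channel MR slice to 3 channels for the VGG input.
        return self.features(x.repeat(1, 3, 1, 1))


class FusionNet(nn.Module):
    """Fuses concatenated deep features and reconstructs the fused image."""

    def __init__(self, in_ch=256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            # Undo the backbone's single 2x downsampling before reconstruction.
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, f1, f2):
        return self.fuse(torch.cat([f1, f2], dim=1))


class Discriminator(nn.Module):
    """Classifies whether an input is a source image or a fused one."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),  # patch-level logits
        )

    def forward(self, x):
        return self.net(x)


# Forward pass: extract features from both modalities, fuse them, then let
# each discriminator compare the fused image against its own source modality,
# driving the two adversarial losses that train the fusion network.
extractor, fusion = FeatureExtractor(), FusionNet(in_ch=256)
d1, d2 = Discriminator(), Discriminator()
t1 = torch.randn(2, 1, 128, 128)  # toy stand-in for modality-1 MR slices
t2 = torch.randn(2, 1, 128, 128)  # toy stand-in for modality-2 MR slices
fused = fusion(extractor(t1), extractor(t2))
adv_logits = (d1(fused), d2(fused))
```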