Journal of Software:2021.32(2):519-550

Survey of Medical Image Segmentation Technology Based on U-Net Structure Improvement
YIN Xiao-Hang, WANG Yong-Cai, LI De-Ying
(School of Information, Renmin University of China, Beijing 100872, China)
Received: May 09, 2020    Revised: June 02, 2020
Abstract: The application of deep learning in the field of medical image segmentation has attracted great attention. Among these methods, the U-Net proposed in 2015 has received wide attention because of its good performance in segmenting small targets and its scalable structure. In recent years, as the performance requirements of medical image segmentation have risen, many scholars have been improving and extending the U-Net structure, for example by improving the encoder-decoder or attaching an external feature pyramid. This study summarizes medical image segmentation technology based on improvements to the U-Net structure from two aspects: performance-oriented optimization and structure-oriented improvement. Related methods are reviewed, classified, and summarized; the loss functions, evaluation metrics, and modules commonly used in image segmentation are introduced; and the ideas and methods for improving the U-Net structure for different goals are summarized, providing a reference for related research.
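The encoder-decoder structure with skip connections that the surveyed improvements build on can be illustrated by tracing feature-map shapes through a standard 4-level U-Net. This is a minimal sketch, not code from the paper; the function name, the channel progression (64 doubling per level, as in the original 2015 U-Net), and the use of 2x2 pooling and channel concatenation are stated assumptions, with the 3x3 convolution pairs omitted.

```python
# Illustrative sketch: shape bookkeeping for a U-Net-style encoder-decoder.
# Each encoder level halves the spatial size and doubles the channels; each
# decoder level upsamples and concatenates the matching encoder feature map
# (the skip connection) along the channel axis.

def unet_shapes(h, w, base_channels=64, depth=4):
    """Return (channels, height, width) tuples for the encoder levels,
    the bottleneck, and the post-concatenation decoder levels."""
    enc = []
    c, ch, cw = base_channels, h, w
    for _ in range(depth):
        enc.append((c, ch, cw))                # saved for the skip connection
        ch, cw, c = ch // 2, cw // 2, c * 2    # 2x2 max-pool, channels doubled
    bottleneck = (c, ch, cw)
    dec = []
    for skip_c, skip_h, skip_w in reversed(enc):
        ch, cw, c = ch * 2, cw * 2, c // 2     # up-conv: upsample, halve channels
        assert (ch, cw) == (skip_h, skip_w)    # skip must match spatially
        dec.append((c + skip_c, ch, cw))       # concatenate skip along channels
        # a pair of 3x3 convs would then reduce channels back to c (omitted)
    return enc, bottleneck, dec

enc, mid, dec = unet_shapes(256, 256)
# enc[0] is (64, 256, 256), mid is (1024, 16, 16), dec[-1] is (128, 256, 256)
```

The assertion makes explicit why many of the structural improvements discussed in this survey (e.g. modified encoders or external feature pyramids) must keep the encoder and decoder resolutions aligned: the skip concatenation only works when the spatial sizes match.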
Article ID:     CLC number: TP391    Document code:
Foundation items: National Natural Science Foundation of China (61972404, 61672524, 11671400)
Reference text:

YIN Xiao-Hang, WANG Yong-Cai, LI De-Ying. Survey of medical image segmentation technology based on U-Net structure improvement. Journal of Software, 2021, 32(2): 519-550