Progressive Image Deblurring Guided by Blur Perception

Authors: Hu Bo, Tian Rong'ao, Zheng Jia, Gong Bingbing, Gao Xinbo

CLC number: TP391

Fund projects: National Natural Science Foundation of China (62101084); Natural Science Foundation of Chongqing (CSTB2023NSCQ-BHX0187, CSTB2023NSCQ-LZX0085); Science and Technology Research Project of Chongqing Municipal Education Commission (KJQN202200638); Chongqing Postdoctoral Research Project (2022CQBSHTB2052)



    Abstract:

    Image deblurring has attracted much attention due to its wide applications in fields such as security surveillance, medical image processing, and remote sensing image processing. Although end-to-end methods have made significant progress, a single U-Net network struggles to handle complex motion blur, while restoration approaches based on auxiliary tasks often suffer from large parameter sizes. In addition, the vast majority of methods fail to accurately identify the locations and degrees of blur in different images, while blur perception is often one of the key factors determining the restoration performance of models. Inspired by this, this study proposes a progressive image deblurring algorithm guided by blur perception (PDBP-Net). The main idea of the algorithm is to utilize auxiliary tasks to generate blur perception feature maps, thus guiding the algorithm to achieve more refined restoration. First, the high-frequency difference and image residual generative subnetwork (HDIRG-net) employs auxiliary learning to simultaneously generate high-frequency difference feature maps and residual maps. These are then fed into the blur perception module guided by high-frequency differences (BPGHD) for deep fusion and extraction of blur-related information, resulting in the generation of blur perception feature maps. Moreover, to alleviate the limitations of a single network in restoring complex scenes, this module uses the residual maps and blur maps to generate preliminary restored images. Finally, the blur perception-guided detail restoration subnetwork (BPGDR-net) conducts targeted re-optimization of the preliminary restored images under the guidance of the blur perception feature maps, thus generating the final restored images. The proposed deblurring model is extensively evaluated on multiple benchmark datasets and achieves significant improvements over state-of-the-art deblurring methods. Specifically, on the GoPro dataset, the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) reach 33.85 dB and 0.967, respectively, with the PSNR being 0.39 dB higher than that of the second-best method. Extensive experiments demonstrate that PDBP-Net outperforms state-of-the-art auxiliary learning-based methods and significantly enhances image deblurring performance, confirming the effectiveness of the proposed method.
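The three-stage data flow described above can be sketched as follows. Only the stage names (HDIRG-net, BPGHD, BPGDR-net) and the order of data flow come from the abstract; every function body below is an illustrative placeholder assumption, not the authors' implementation.

```python
# Minimal data-flow sketch of the PDBP-Net pipeline. Images are flat
# lists of pixel intensities; all operations are toy stand-ins.

def hdirg_net(blurry):
    """HDIRG-net: auxiliary learning jointly produces a high-frequency
    difference map and a residual map (stubbed as local contrasts and a
    fixed scaling; placeholders only)."""
    hf_diff = [abs(b - a) for a, b in zip(blurry, blurry[1:])] + [0.0]
    residual = [p * 0.1 for p in blurry]
    return hf_diff, residual

def bpghd(blurry, hf_diff, residual):
    """BPGHD: fuses both maps into a blur perception map and uses the
    residual to form a preliminary restoration."""
    blur_map = [min(1.0, h) for h in hf_diff]  # per-pixel blur weight
    preliminary = [b + r for b, r in zip(blurry, residual)]
    return blur_map, preliminary

def bpgdr_net(preliminary, blur_map):
    """BPGDR-net: re-optimizes the preliminary result, weighted by the
    blur perception map (stronger correction where more blur is sensed)."""
    return [p * (1.0 + 0.05 * w) for p, w in zip(preliminary, blur_map)]

def pdbp_net(blurry):
    hf_diff, residual = hdirg_net(blurry)
    blur_map, preliminary = bpghd(blurry, hf_diff, residual)
    return bpgdr_net(preliminary, blur_map)

restored = pdbp_net([0.2, 0.4, 0.5, 0.5])
print(len(restored))  # same spatial size as the input
```

The point of the sketch is the progressive structure: the auxiliary outputs of the first stage both feed the fusion module and guide the final refinement, rather than the network predicting the sharp image in one shot.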
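For context on the reported 33.85 dB figure: PSNR is defined as 10 · log10(MAX² / MSE), where MAX is the peak pixel value and MSE the mean squared error between the reference and restored images. A minimal pure-Python sketch (the `psnr` helper is illustrative, not from the paper):

```python
import math

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized
    images, given as flat lists of pixel intensities."""
    assert len(reference) == len(restored)
    mse = sum((r - d) ** 2 for r, d in zip(reference, restored)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Each restored pixel deviates by 4 intensity levels, so MSE = 16.
ref = [100, 120, 140, 160]
out = [104, 124, 136, 156]
print(round(psnr(ref, out), 2))  # → 36.09
```

Higher is better: a 0.39 dB PSNR gap, as reported on GoPro, corresponds to roughly a 9% reduction in mean squared error.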

Cite this article:

Hu B, Tian RA, Zheng J, Gong BB, Gao XB. Progressive image deblurring guided by blur perception. Journal of Software, 2026, 37(5): 2309–2324 (in Chinese).

History
  • Received: 2024-10-18
  • Revised: 2025-03-18
  • Published online: 2025-12-03
  • Publication date: 2026-05-06
Copyright © Institute of Software, Chinese Academy of Sciences