Xinyi Zhang   Hang Dong   Zhe Hu   Wei-Sheng Lai   Fei Wang   Ming-Hsuan Yang

Abstract

Single-image super-resolution is a fundamental task in vision applications, enhancing image quality with respect to spatial resolution. If the input image contains degraded pixels, the artifacts caused by the degradation can be amplified by super-resolution methods. Image blur is a common source of degradation: images captured by moving or still cameras are inevitably affected by motion blur due to relative motion between the sensor and the scene. In this work, we focus on super-resolution in the presence of motion blur. We propose a deep gated fusion convolutional neural network that generates a clear high-resolution frame from a single natural image with severe blur. By decomposing feature extraction into two task-independent streams, the dual-branch design facilitates training by avoiding learning the mixed degradation all in one, and thus improves the final high-resolution prediction. Extensive experiments demonstrate that our method generates sharper super-resolved images from low-resolution inputs with high computational efficiency.
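The gated fusion idea from the abstract can be illustrated with a small NumPy sketch. This is a simplification, not the paper's exact architecture: the feature shapes, the 1x1 gating "convolution" (implemented as a matrix product), and the convex blending rule are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(feat_deblur, feat_sr, w_gate, b_gate):
    """Fuse two feature streams with a learned per-pixel gate.

    feat_deblur, feat_sr: (H, W, C) feature maps from the two branches.
    w_gate: (2C, C) weights of a 1x1 convolution over the concatenation.
    b_gate: (C,) bias.
    """
    concat = np.concatenate([feat_deblur, feat_sr], axis=-1)  # (H, W, 2C)
    gate = sigmoid(concat @ w_gate + b_gate)                  # (H, W, C), values in (0, 1)
    # Per-pixel, per-channel soft blend of the two streams.
    return gate * feat_deblur + (1.0 - gate) * feat_sr

# Toy example: 4x4 feature maps with 8 channels.
rng = np.random.default_rng(0)
H, W, C = 4, 4, 8
f_db = rng.standard_normal((H, W, C))
f_sr = rng.standard_normal((H, W, C))
w = rng.standard_normal((2 * C, C)) * 0.1
b = np.zeros(C)
fused = gated_fusion(f_db, f_sr, w, b)
print(fused.shape)  # (4, 4, 8)
```

Because the gate lies in (0, 1), each fused value is a convex combination of the two branch features, so the network can adaptively weight deblurring features against super-resolution features at every spatial location.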

[Figure: network architecture]
[Figure: extracted features]


Technical Paper and Code

To obtain a more stable training process, we now adopt a three-step training strategy, which differs from the one described in our paper and improves PSNR from 27.74 dB to 27.81 dB on the LR-GOPRO 4x dataset.

| Model | LR-GOPRO 4x PSNR (dB) | Time (s) |
| --- | --- | --- |
| SCGAN | 22.74 | 0.66 |
| SRResNet | 24.40 | 0.07 |
| ED-DSRN | 26.44 | 0.10 |
| DeepDeblur + EDSR | 25.09 | 2.70 |
| EDSR + DeepDeblur | 26.35 | 8.10 |
| GFN (BMVC paper) | 27.74 | 0.07 |
| GFN (now) | 27.81 | 0.07 |
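The PSNR values in the table use the standard definition in dB. For reference, a minimal sketch of the computation (the function name and the assumed [0, 1] pixel range are our choices, not from the paper):

```python
import numpy as np

def psnr(img, ref, peak=1.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images.

    img, ref: arrays scaled to [0, peak].
    """
    mse = np.mean((img.astype(np.float64) - ref.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy check: a uniform error of 0.1 on a [0, 1] image gives 20 dB.
ref = np.zeros((8, 8))
img = ref + 0.1
print(round(psnr(img, ref), 2))  # 20.0
```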

Performance Versus Inference Time and Model Parameters

[Figure: PSNR versus inference time and model parameters]


Visual Results on LR-GOPRO

[Figure: visual comparisons on LR-GOPRO]

Visual Results on Real Datasets[5]

[Figure: visual comparisons on real datasets]


References

  1. O. Kupyn, V. Budzan, M. Mykhailych, D. Mishkin, and J. Matas. DeblurGAN: Blind motion deblurring using conditional adversarial networks. In CVPR, 2018.
  2. C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi. Photo-realistic single image super-resolution using a generative adversarial network. In CVPR, 2017.
  3. B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee. Enhanced deep residual networks for single image super-resolution. In CVPR Workshops, 2017.
  4. S. Nah, T. H. Kim, and K. M. Lee. Deep multi-scale convolutional neural network for dynamic scene deblurring. In CVPR, 2017.
  5. S. Su, M. Delbracio, J. Wang, G. Sapiro, W. Heidrich, and O. Wang. Deep video deblurring. In CVPR, 2017.
  6. X. Xu, D. Sun, J. Pan, Y. Zhang, H. Pfister, and M. H. Yang. Learning to super-resolve blurry face and text images. In ICCV, 2017.
  7. X. Zhang, F. Wang, H. Dong, and Y. Guo. A deep encoder-decoder networks for joint deblurring and super-resolution. In ICASSP, 2018.