Best-Buddy GANs for Highly Detailed Image Super-Resolution

Abstract

We consider the single image super-resolution (SISR) problem, where a high-resolution (HR) image is generated from a low-resolution (LR) input. Recently, generative adversarial networks (GANs) have become popular for hallucinating details. Most methods along this line rely on a predefined single-LR-single-HR mapping, which is not flexible enough for the ill-posed SISR task. Moreover, GAN-generated fake details often undermine the realism of the whole image. We address these issues by proposing best-buddy GANs (Beby-GAN) for rich-detail SISR. Relaxing the rigid one-to-one constraint, we allow the estimated patches to dynamically seek trustworthy surrogates of supervision during training, which is beneficial for producing more reasonable details. In addition, we propose a region-aware adversarial learning strategy that directs our model to focus adaptively on generating details for textured areas. Extensive experiments demonstrate the effectiveness of our method. An ultra-high-resolution 4K dataset is also constructed to facilitate future super-resolution research.
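The core of the best-buddy idea is that each estimated SR patch is supervised by the closest "buddy" among a pool of candidate HR patches, rather than only by its fixed ground-truth counterpart. Below is a minimal, hypothetical PyTorch sketch of such a patch-matching loss (not the authors' released code); it simplifies the method by drawing candidates from the HR image itself and matching each SR patch only to its nearest candidate, whereas the full method also keeps candidates close to the corresponding ground-truth patch. The function name, patch size, and candidate construction are illustrative assumptions.

```python
# Hypothetical sketch of a best-buddy style patch loss (simplified, not the paper's code).
import torch
import torch.nn.functional as F

def best_buddy_loss(sr, hr, patch_size=4):
    """sr, hr: (B, C, H, W) super-resolved and ground-truth HR images."""
    # Unfold both images into non-overlapping patches: (B, N, C * p * p).
    sr_p = F.unfold(sr, kernel_size=patch_size, stride=patch_size).transpose(1, 2)
    hr_p = F.unfold(hr, kernel_size=patch_size, stride=patch_size).transpose(1, 2)

    # Pairwise distances from every SR patch to every candidate HR patch: (B, N, N).
    dist = torch.cdist(sr_p, hr_p)

    # Each SR patch picks its "best buddy" (closest candidate), relaxing the
    # rigid one-to-one SR/HR correspondence.
    buddy_idx = dist.argmin(dim=2)
    buddies = torch.gather(
        hr_p, 1, buddy_idx.unsqueeze(-1).expand(-1, -1, hr_p.size(-1))
    )

    # L1 loss between each SR patch and its dynamically chosen supervision.
    return F.l1_loss(sr_p, buddies)
```

In this sketch the candidate pool is just the unfolded HR patches of the same image; the paper additionally constrains the chosen buddy to stay near the ground-truth patch so the surrogate supervision remains trustworthy.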

Cite

Text

Li et al. "Best-Buddy GANs for Highly Detailed Image Super-Resolution." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I2.20030

Markdown

[Li et al. "Best-Buddy GANs for Highly Detailed Image Super-Resolution." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/li2022aaai-best/) doi:10.1609/AAAI.V36I2.20030

BibTeX

@inproceedings{li2022aaai-best,
  title     = {{Best-Buddy GANs for Highly Detailed Image Super-Resolution}},
  author    = {Li, Wenbo and Zhou, Kun and Qi, Lu and Lu, Liying and Lu, Jiangbo},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {1412--1420},
  doi       = {10.1609/AAAI.V36I2.20030},
  url       = {https://mlanthology.org/aaai/2022/li2022aaai-best/}
}