Unpaired Face Restoration via Learnable Cross-Quality Shift

Abstract

Face restoration aims to recover high-quality (HQ) face images from low-quality (LQ) ones affected by various unknown degradations. Unpaired face restoration approaches focus on adapting to unseen degradations, which is a more challenging setting. Recently, the generative facial priors of StyleGAN have been used to improve the restoration capability of paired face restoration methods. For unpaired methods, however, exploiting such priors is challenging due to the lack of paired supervision. To address this issue, we take advantage of the editing capability of StyleGAN’s latent code and propose a novel learnable cross-quality shift. The proposed shift not only introduces generative facial priors into the unpaired framework, but also achieves quality conversion through straightforward addition/subtraction in the latent feature space. Furthermore, we design a two-branch framework built on the proposed cross-quality shift to handle unpaired data and improve restoration fidelity. With this unpaired framework, our method can be fine-tuned on images with unseen degradations. Experimental results show that (i) compared to state-of-the-art methods, our method improves performance under both moderate and severe degradation; and (ii) both the proposed learnable cross-quality shift and the two-branch framework benefit restoration performance.
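
The abstract does not include an implementation, but the core idea (a learnable shift vector that converts between LQ and HQ regions of StyleGAN's latent space by addition/subtraction) can be sketched in PyTorch as below. The class name CrossQualityShift, the W+ dimensions (18 layers × 512), and the single shared shift parameter delta are illustrative assumptions, not the authors' exact design.

import torch
import torch.nn as nn

class CrossQualityShift(nn.Module):
    """Learnable shift between LQ and HQ codes in a StyleGAN W+ space.

    Assumed sketch: one shift vector per W+ layer, applied additively.
    """
    def __init__(self, num_layers=18, latent_dim=512):
        super().__init__()
        # delta is the learnable cross-quality shift (assumed W+ shape).
        self.delta = nn.Parameter(torch.zeros(num_layers, latent_dim))

    def lq_to_hq(self, w_lq):
        # Quality conversion by simple addition in the latent space.
        return w_lq + self.delta

    def hq_to_lq(self, w_hq):
        # Inverse conversion by subtracting the same shift.
        return w_hq - self.delta

In such a setup, the shifted code would be decoded by a pretrained StyleGAN generator, and, since no paired supervision is available, delta would presumably be trained with unpaired objectives such as adversarial and cycle-consistency losses; those training details are assumptions beyond what the abstract states.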

Cite

Text

Dong et al. "Unpaired Face Restoration via Learnable Cross-Quality Shift." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022. doi:10.1109/CVPRW56347.2022.00082

Markdown

[Dong et al. "Unpaired Face Restoration via Learnable Cross-Quality Shift." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022.](https://mlanthology.org/cvprw/2022/dong2022cvprw-unpaired/) doi:10.1109/CVPRW56347.2022.00082

BibTeX

@inproceedings{dong2022cvprw-unpaired,
  title     = {{Unpaired Face Restoration via Learnable Cross-Quality Shift}},
  author    = {Dong, Yangyi and Zhang, Xiaoyun and Wang, Zhixin and Zhang, Ya and Chen, Siheng and Wang, Yanfeng},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2022},
  pages     = {666--674},
  doi       = {10.1109/CVPRW56347.2022.00082},
  url       = {https://mlanthology.org/cvprw/2022/dong2022cvprw-unpaired/}
}