Learning to Hallucinate Face Images via Component Generation and Enhancement

Abstract

We propose a two-stage method for face hallucination. First, we generate facial components of the input image using CNNs. These components represent the basic facial structures. Second, we synthesize fine-grained facial structures from high-resolution training images, and the details of these structures are transferred into the facial components for enhancement. In this way, we generate facial components to approximate the ground-truth global appearance in the first stage and enhance them by recovering details in the second stage. The experiments demonstrate that our method performs favorably against state-of-the-art methods.
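The abstract outlines a two-stage pipeline: CNN-based component generation followed by detail transfer from high-resolution exemplars. Below is a minimal sketch of such a pipeline in PyTorch, provided only to make the structure concrete; the network layout, patch size, blending weight, and the co-located patch matching are assumptions for illustration and do not reproduce the authors' implementation.

```python
# Minimal sketch of a two-stage hallucination pipeline (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ComponentGenerator(nn.Module):
    """Stage 1 (assumed form): a small CNN mapping an upsampled LR face
    to a coarse component map approximating the global appearance."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, lr_face):
        return self.net(lr_face)


def enhance_with_exemplar(coarse, exemplar, patch=8, blend=0.5):
    """Stage 2 (assumed form): blend high-frequency residuals from a
    high-resolution exemplar into the coarse result, patch by patch."""
    # Detail layer: exemplar minus its locally smoothed version.
    smooth = F.avg_pool2d(exemplar, 3, stride=1, padding=1)
    detail = exemplar - smooth

    out = coarse.clone()
    _, _, h, w = coarse.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            # For simplicity we use the co-located exemplar patch;
            # a real system would search aligned component regions.
            out[:, :, y:y + patch, x:x + patch] += (
                blend * detail[:, :, y:y + patch, x:x + patch]
            )
    return out.clamp(0, 1)


if __name__ == "__main__":
    lr_face = torch.rand(1, 3, 128, 128)   # bicubically upsampled input
    exemplar = torch.rand(1, 3, 128, 128)  # high-resolution training face
    coarse = ComponentGenerator()(lr_face)              # stage 1
    result = enhance_with_exemplar(coarse, exemplar)    # stage 2
    print(result.shape)  # torch.Size([1, 3, 128, 128])
```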

Cite

Text

Song et al. "Learning to Hallucinate Face Images via Component Generation and Enhancement." International Joint Conference on Artificial Intelligence, 2017. doi:10.24963/IJCAI.2017/633

Markdown

[Song et al. "Learning to Hallucinate Face Images via Component Generation and Enhancement." International Joint Conference on Artificial Intelligence, 2017.](https://mlanthology.org/ijcai/2017/song2017ijcai-learning/) doi:10.24963/IJCAI.2017/633

BibTeX

@inproceedings{song2017ijcai-learning,
  title     = {{Learning to Hallucinate Face Images via Component Generation and Enhancement}},
  author    = {Song, Yibing and Zhang, Jiawei and He, Shengfeng and Bao, Linchao and Yang, Qingxiong},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2017},
  pages     = {4537--4543},
  doi       = {10.24963/IJCAI.2017/633},
  url       = {https://mlanthology.org/ijcai/2017/song2017ijcai-learning/}
}