ManiFest: Manifold Deformation for Few-Shot Image Translation
Abstract
Most image-to-image translation methods require a large number of training images, which restricts their applicability. We instead propose ManiFest: a framework for few-shot image translation that learns a context-aware representation of a target domain from a few images only. To enforce feature consistency, our framework learns a style manifold between source and additional anchor domains (assumed to be composed of large numbers of images). The learned manifold is interpolated and deformed towards the few-shot target domain via patch-based adversarial and feature statistics alignment losses. All of these components are trained simultaneously during a single end-to-end loop. In addition to the general few-shot translation task, our approach can alternatively be conditioned on a single exemplar image to reproduce its specific style. Extensive experiments demonstrate the efficacy of ManiFest on multiple tasks, outperforming the state-of-the-art on all metrics. Our code is available at https://github.com/cv-rits/ManiFest.
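To give a rough sense of the two ingredients named in the abstract, the sketch below shows how a style could be obtained as a convex combination of anchor-domain style codes and how a feature statistics alignment loss could compare generated features against few-shot target features. This is a minimal illustration under assumed names and shapes (interpolate_styles, feature_stats_loss, style_dim, etc.), not the authors' released implementation.

```python
# Illustrative sketch (not the authors' code): interpolating anchor style codes
# on a learned manifold and aligning channel-wise feature statistics with the
# few-shot target. All names and dimensions here are assumptions.
import torch
import torch.nn.functional as F

def interpolate_styles(anchor_styles, interp_weights):
    """Combine anchor-domain style codes into a single style on the manifold.

    anchor_styles: (num_anchors, style_dim) learned style codes
    interp_weights: (num_anchors,) unnormalized mixing weights
    """
    weights = torch.softmax(interp_weights, dim=0)             # convex combination
    return (weights.unsqueeze(1) * anchor_styles).sum(dim=0)   # (style_dim,)

def feature_stats_loss(gen_feats, target_feats):
    """Match channel-wise mean/std of generated vs. few-shot target features."""
    def stats(x):  # x: (B, C, H, W)
        mu = x.mean(dim=(2, 3))
        sigma = x.std(dim=(2, 3)) + 1e-6
        return mu, sigma
    mu_g, sig_g = stats(gen_feats)
    mu_t, sig_t = stats(target_feats)
    return F.l1_loss(mu_g, mu_t) + F.l1_loss(sig_g, sig_t)

# Toy usage with random tensors standing in for real encoder features:
anchor_styles = torch.randn(2, 64, requires_grad=True)    # two anchor domains
interp_weights = torch.zeros(2, requires_grad=True)        # learned during training
style = interpolate_styles(anchor_styles, interp_weights)

gen_feats = torch.randn(4, 256, 32, 32, requires_grad=True)
target_feats = torch.randn(4, 256, 32, 32)
loss = feature_stats_loss(gen_feats, target_feats)
loss.backward()
```

In the paper, losses of this kind (together with a patch-based adversarial term) deform the learned manifold towards the few-shot target domain within a single end-to-end training loop.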
Cite
Text
Pizzati et al. "ManiFest: Manifold Deformation for Few-Shot Image Translation." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-19790-1_27
Markdown
[Pizzati et al. "ManiFest: Manifold Deformation for Few-Shot Image Translation." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/pizzati2022eccv-manifest/) doi:10.1007/978-3-031-19790-1_27
BibTeX
@inproceedings{pizzati2022eccv-manifest,
title = {{ManiFest: Manifold Deformation for Few-Shot Image Translation}},
author = {Pizzati, Fabio and Lalonde, Jean-François and de Charette, Raoul},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2022},
doi = {10.1007/978-3-031-19790-1_27},
url = {https://mlanthology.org/eccv/2022/pizzati2022eccv-manifest/}
}