Simple Disentanglement of Style and Content in Visual Representations
Abstract
Learning visual representations with interpretable features, i.e., disentangled representations, remains a challenging problem. Existing methods demonstrate some success but are hard to apply to large-scale vision datasets like ImageNet. In this work, we propose a simple post-processing framework to disentangle content and style in learned representations from pre-trained vision models. We model the pre-trained features probabilistically as linearly entangled combinations of the latent content and style factors and develop a simple disentanglement algorithm based on the probabilistic model. We show that the method provably disentangles content and style features and verify its efficacy empirically. Our post-processed features yield significant domain generalization performance improvements when the distribution shift occurs due to style changes or style-related spurious correlations.
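The abstract describes modeling pre-trained features as linear mixtures of latent content and style factors and then disentangling them with a simple post-processing step. Below is a minimal, hedged sketch of that general idea: a linear encoder/decoder fit on feature pairs that share content but differ in style, with an invariance penalty on the designated content dimensions. The synthetic data, loss terms, dimensions, and variable names are illustrative assumptions for exposition, not the authors' exact probabilistic model or algorithm.

```python
# Illustrative sketch of post-hoc linear disentanglement of content vs. style
# in pre-trained features. The setup and loss are assumptions, not the paper's
# exact method.
import torch

torch.manual_seed(0)

d_feat, d_content, d_style, n = 64, 8, 4, 2048

# Synthetic stand-in for pre-trained features: z = A @ [content; style].
A = torch.randn(d_feat, d_content + d_style)
content = torch.randn(n, d_content)
style_a = torch.randn(n, d_style)
style_b = torch.randn(n, d_style)  # same content, different "style"
z_a = torch.cat([content, style_a], dim=1) @ A.T
z_b = torch.cat([content, style_b], dim=1) @ A.T

# Linear encoder/decoder; the first d_content output dims are treated as content.
enc = torch.nn.Linear(d_feat, d_content + d_style, bias=False)
dec = torch.nn.Linear(d_content + d_style, d_feat, bias=False)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-2)

for step in range(2000):
    h_a, h_b = enc(z_a), enc(z_b)
    # Content dims should be invariant across the style-varying pair;
    # a reconstruction term keeps the linear map from collapsing to zero.
    invariance = ((h_a[:, :d_content] - h_b[:, :d_content]) ** 2).mean()
    recon = ((dec(h_a) - z_a) ** 2).mean() + ((dec(h_b) - z_b) ** 2).mean()
    loss = invariance + recon
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.4f}")
# Downstream, h[:, :d_content] would serve as the style-invariant features.
```

In this toy setup the content block of the encoder output is used for downstream tasks where style shifts or style-related spurious correlations are expected; the actual paper derives its procedure from an explicit probabilistic model with identifiability guarantees.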
Cite
Text
Ngweta et al. "Simple Disentanglement of Style and Content in Visual Representations." International Conference on Machine Learning, 2023.
Markdown
[Ngweta et al. "Simple Disentanglement of Style and Content in Visual Representations." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/ngweta2023icml-simple/)
BibTeX
@inproceedings{ngweta2023icml-simple,
  title     = {{Simple Disentanglement of Style and Content in Visual Representations}},
  author    = {Ngweta, Lilian and Maity, Subha and Gittens, Alex and Sun, Yuekai and Yurochkin, Mikhail},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {26063--26086},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/ngweta2023icml-simple/}
}