SW-VAE: Weakly Supervised Learn Disentangled Representation via Latent Factor Swapping
Abstract
Representation disentanglement is an important goal of representation learning that benefits various downstream tasks. To achieve this goal, many unsupervised disentangled representation learning approaches have been developed. However, training without any supervision signal has been shown to be insufficient for disentangled representation learning. We therefore propose a novel weakly supervised training approach, named SW-VAE, which incorporates pairs of input observations as a supervision signal by using the generative factors of datasets. Furthermore, we introduce strategies that gradually increase the learning difficulty during training to smooth the training process. As shown on several datasets, our model exhibits significant improvement over state-of-the-art (SOTA) methods on representation disentanglement tasks.
Cite
Text
Zhu et al. "SW-VAE: Weakly Supervised Learn Disentangled Representation via Latent Factor Swapping." European Conference on Computer Vision Workshops, 2022. doi:10.1007/978-3-031-25063-7_5
Markdown
[Zhu et al. "SW-VAE: Weakly Supervised Learn Disentangled Representation via Latent Factor Swapping." European Conference on Computer Vision Workshops, 2022.](https://mlanthology.org/eccvw/2022/zhu2022eccvw-swvae/) doi:10.1007/978-3-031-25063-7_5
BibTeX
@inproceedings{zhu2022eccvw-swvae,
title = {{SW-VAE: Weakly Supervised Learn Disentangled Representation via Latent Factor Swapping}},
author = {Zhu, Jiageng and Xie, Hanchen and Abd-Almageed, Wael},
booktitle = {European Conference on Computer Vision Workshops},
year = {2022},
  pages = {73--87},
doi = {10.1007/978-3-031-25063-7_5},
url = {https://mlanthology.org/eccvw/2022/zhu2022eccvw-swvae/}
}