Recursive Disentanglement Network
Abstract
Disentangled feature representation is essential for data-efficient learning. The feature space of deep models is inherently compositional. Existing $\beta$-VAE-based methods, which apply disentanglement regularization only to the final embedding space of deep models, cannot effectively regularize such a compositional feature space, resulting in unsatisfactory disentanglement. In this paper, we formulate the compositional disentanglement learning problem from an information-theoretic perspective and propose a recursive disentanglement network (RecurD) that propagates regulatory inductive bias recursively across the compositional feature space during disentangled representation learning. Experimental studies demonstrate that RecurD outperforms $\beta$-VAE and several of its state-of-the-art variants on disentangled representation learning and enables more data-efficient downstream machine learning tasks.
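To make the contrast concrete, below is a minimal, hypothetical PyTorch sketch: a $\beta$-VAE applies a single KL regularizer to the final embedding, whereas a RecurD-style encoder would also propagate a regularization term through each intermediate block of the compositional feature space. The abstract does not specify the paper's actual regularizer, so `RecursiveEncoder`, the per-block Gaussian heads, and the `gamma` weight are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only -- not the authors' method. It contrasts a
# beta-VAE-style penalty on the final embedding with a recursive variant
# that also regularizes every intermediate block.
import torch
import torch.nn as nn


def kl_to_standard_normal(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims, averaged over batch.
    return 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1).mean()


class RecursiveEncoder(nn.Module):
    """Hypothetical encoder exposing a (mu, logvar) head after each block,
    so a disentanglement penalty can be applied throughout the hierarchy."""

    def __init__(self, in_dim=784, hidden=256, latent=10, n_blocks=3):
        super().__init__()
        self.blocks = nn.ModuleList()
        self.heads = nn.ModuleList()
        dim = in_dim
        for _ in range(n_blocks):
            self.blocks.append(nn.Sequential(nn.Linear(dim, hidden), nn.ReLU()))
            self.heads.append(nn.Linear(hidden, 2 * latent))  # -> (mu, logvar)
            dim = hidden

    def forward(self, x):
        stats = []
        h = x
        for block, head in zip(self.blocks, self.heads):
            h = block(h)
            mu, logvar = head(h).chunk(2, dim=1)
            stats.append((mu, logvar))
        return stats  # per-block Gaussian statistics; the last pair is the embedding


def recurd_style_regularizer(stats, beta=4.0, gamma=1.0):
    # beta-VAE regularizes only the final embedding; the recursive variant
    # (assumed here) adds a weighted penalty for every intermediate block.
    mu, logvar = stats[-1]
    loss = beta * kl_to_standard_normal(mu, logvar)
    for mu_i, logvar_i in stats[:-1]:
        loss = loss + gamma * kl_to_standard_normal(mu_i, logvar_i)
    return loss


if __name__ == "__main__":
    enc = RecursiveEncoder()
    x = torch.randn(8, 784)
    print(recurd_style_regularizer(enc(x)))  # scalar regularization term
```

Setting `gamma=0` recovers the plain $\beta$-VAE penalty on the embedding alone, which is the baseline behavior the abstract argues is insufficient for compositional feature spaces.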
Cite
Text
Chen et al. "Recursive Disentanglement Network." International Conference on Learning Representations, 2022.

Markdown

[Chen et al. "Recursive Disentanglement Network." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/chen2022iclr-recursive/)

BibTeX
@inproceedings{chen2022iclr-recursive,
title = {{Recursive Disentanglement Network}},
author = {Chen, Yixuan and Shi, Yubin and Li, Dongsheng and Wang, Yujiang and Dong, Mingzhi and Zhao, Yingying and Dick, Robert P. and Lv, Qin and Yang, Fan and Shang, Li},
booktitle = {International Conference on Learning Representations},
year = {2022},
url = {https://mlanthology.org/iclr/2022/chen2022iclr-recursive/}
}