Variational Interaction Information Maximization for Cross-Domain Disentanglement

Abstract

Cross-domain disentanglement is the problem of learning representations partitioned into domain-invariant and domain-specific parts, which is key to successful domain transfer and to measuring the semantic distance between two domains. Grounded in information theory, we cast the simultaneous learning of domain-invariant and domain-specific representations as a joint objective of multiple information constraints, which does not require adversarial training or gradient reversal layers. We derive a tractable bound of the objective and propose a generative model named Interaction Information Auto-Encoder (IIAE). Our approach reveals insights on the desirable representation for cross-domain disentanglement and its connection to the Variational Auto-Encoder (VAE). We demonstrate the validity of our model on the image-to-image translation and cross-domain retrieval tasks. We further show that our model achieves state-of-the-art performance in the zero-shot sketch-based image retrieval task, even without external knowledge.
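As background (this is the standard information-theoretic definition, not a formula quoted from the paper), the interaction information among two domains X, Y and a shared representation Z can be written, under the sign convention that makes redundant/shared information positive, as I(X; Y; Z) = I(X; Y) - I(X; Y | Z) = I(X; Z) - I(X; Z | Y) = I(Y; Z) - I(Y; Z | X). Maximizing such a quantity with respect to Z encourages Z to capture information common to both domains, which is the intuition behind the domain-invariant part of the representation described in the abstract.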

Cite

Text

Hwang et al. "Variational Interaction Information Maximization for Cross-Domain Disentanglement." Neural Information Processing Systems, 2020.

Markdown

[Hwang et al. "Variational Interaction Information Maximization for Cross-Domain Disentanglement." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/hwang2020neurips-variational/)

BibTeX

@inproceedings{hwang2020neurips-variational,
  title     = {{Variational Interaction Information Maximization for Cross-Domain Disentanglement}},
  author    = {Hwang, HyeongJoo and Kim, Geon-Hyeong and Hong, Seunghoon and Kim, Kee-Eung},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/hwang2020neurips-variational/}
}