Multimodal Generative Learning Utilizing Jensen-Shannon-Divergence

Abstract

Learning from different data types is a long-standing goal in machine learning research, as multiple information sources co-occur when describing natural phenomena. However, existing generative models that approximate a multimodal ELBO rely on difficult or inefficient training schemes to learn a joint distribution and the dependencies between modalities. In this work, we propose a novel, efficient objective function that utilizes the Jensen-Shannon divergence for multiple distributions. It simultaneously approximates the unimodal and joint multimodal posteriors directly via a dynamic prior. In addition, we theoretically prove that the new multimodal JS-divergence (mmJSD) objective optimizes an ELBO. In extensive experiments, we demonstrate the advantage of the proposed mmJSD model compared to previous work in unsupervised, generative learning tasks.
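The objective described above builds on the Jensen-Shannon divergence generalized to multiple distributions: JS_π(p_1, ..., p_K) = Σ_k π_k KL(p_k || m), where m = Σ_k π_k p_k is a mixture that plays the role of the dynamic prior. The sketch below is not the paper's implementation (which works with Gaussian unimodal posteriors in a VAE); it only illustrates the multi-distribution JS divergence on discrete distributions, with the function names `kl` and `multi_js` chosen here for clarity.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete distributions (with small eps for stability)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def multi_js(dists, weights=None, eps=1e-12):
    """Generalized Jensen-Shannon divergence for K distributions:
    JS_pi(p_1, ..., p_K) = sum_k pi_k * KL(p_k || m),  m = sum_k pi_k * p_k.
    The mixture m is the analogue of the dynamic prior in the abstract."""
    dists = np.asarray(dists, dtype=float)
    k = dists.shape[0]
    if weights is None:
        weights = np.full(k, 1.0 / k)
    m = np.average(dists, axis=0, weights=weights)  # mixture distribution
    return float(sum(w * kl(p, m, eps) for w, p in zip(weights, dists)))

# Toy example: three "unimodal posteriors" over four latent states
p1 = [0.7, 0.1, 0.1, 0.1]
p2 = [0.1, 0.7, 0.1, 0.1]
p3 = [0.25, 0.25, 0.25, 0.25]
print(multi_js([p1, p2, p3]))  # > 0; equals 0 only if all distributions coincide
```

In the mmJSD setting, minimizing this divergence pulls every unimodal posterior toward a shared mixture-style distribution, which is how the objective couples the modalities within a single ELBO.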

Cite

Text

Sutter et al. "Multimodal Generative Learning Utilizing Jensen-Shannon-Divergence." Neural Information Processing Systems, 2020.

Markdown

[Sutter et al. "Multimodal Generative Learning Utilizing Jensen-Shannon-Divergence." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/sutter2020neurips-multimodal/)

BibTeX

@inproceedings{sutter2020neurips-multimodal,
  title     = {{Multimodal Generative Learning Utilizing Jensen-Shannon-Divergence}},
  author    = {Sutter, Thomas and Daunhawer, Imant and Vogt, Julia},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/sutter2020neurips-multimodal/}
}