Learning Multimodal Latent Generative Models with Energy-Based Prior
Abstract
Multimodal generative models have recently gained significant attention for their ability to learn representations across various modalities, enhancing joint and cross-generation coherence. However, most existing works use standard Gaussian or Laplacian distributions as priors, which may struggle to capture the diverse information inherent in multiple data types because they are unimodal and less informative. Energy-based models (EBMs), known for their expressiveness and flexibility across various tasks, have yet to be thoroughly explored in the context of multimodal generative models. In this paper, we propose a novel framework that integrates a multimodal latent generative model with an EBM. Both models can be trained jointly through a variational scheme. This approach results in a more expressive and informative prior that better captures information across multiple modalities. Our experiments validate the proposed model, demonstrating its superior generation coherence.
Keywords: EBM · Multimodal latent generative model
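The page does not reproduce the paper's objective or architecture, so the sketch below is only an illustrative, hypothetical PyTorch rendering of the idea the abstract describes: an energy-based prior over a shared latent space, trained jointly with modality-specific encoders and decoders through a variational objective, with short-run Langevin dynamics drawing prior samples. All names (EnergyPrior, sample_prior, joint_loss), the posterior-averaging step, and the squared-error reconstruction term are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class EnergyPrior(nn.Module):
    """Scalar energy E_theta(z); the prior is exp(-E_theta(z)) tilting a standard Gaussian base."""
    def __init__(self, z_dim=16, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z):
        return self.net(z).squeeze(-1)

def sample_prior(ebm, n, z_dim, steps=20, step_size=0.1):
    """Short-run Langevin dynamics targeting the EBM-tilted Gaussian prior."""
    z = torch.randn(n, z_dim)
    for _ in range(steps):
        z = z.detach().requires_grad_(True)
        energy = ebm(z).sum() + 0.5 * (z ** 2).sum()  # E_theta(z) + ||z||^2 / 2
        grad = torch.autograd.grad(energy, z)[0]
        z = z - 0.5 * step_size ** 2 * grad + step_size * torch.randn_like(z)
    return z.detach()

def joint_loss(ebm, encoders, decoders, batch, z_dim=16):
    """One variational step over a dict of modalities {name: tensor}.

    encoders[m](x) -> (mu, logvar); decoders[m](z) -> reconstruction of modality m.
    Posterior parameters are averaged across modalities as a simple placeholder
    for the multimodal inference model, which is not specified on this page.
    """
    mus, logvars = zip(*[encoders[m](x) for m, x in batch.items()])
    mu, logvar = torch.stack(mus).mean(0), torch.stack(logvars).mean(0)
    z_post = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterized sample

    recon = sum(((decoders[m](z_post) - x) ** 2).mean() for m, x in batch.items())

    # KL to the tilted prior, up to the log-normalizer: Gaussian KL plus expected energy
    # on posterior samples, with energy on short-run prior samples subtracted so the
    # EBM receives a maximum-likelihood-style gradient.
    kl_base = 0.5 * (mu ** 2 + logvar.exp() - logvar - 1).sum(-1).mean()
    z_prior = sample_prior(ebm, z_post.shape[0], z_dim)
    ebm_term = ebm(z_post).mean() - ebm(z_prior).mean()

    return recon + kl_base + ebm_term
```

Under these assumptions, a single optimizer over the EBM, encoders, and decoders can minimize joint_loss per batch; the actual training scheme and hyperparameters should be taken from the paper itself.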
Cite
Text
Yuan et al. "Learning Multimodal Latent Generative Models with Energy-Based Prior." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-73024-5_6Markdown
[Yuan et al. "Learning Multimodal Latent Generative Models with Energy-Based Prior." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/yuan2024eccv-learning/) doi:10.1007/978-3-031-73024-5_6BibTeX
@inproceedings{yuan2024eccv-learning,
title = {{Learning Multimodal Latent Generative Models with Energy-Based Prior}},
author = {Yuan, Shiyu and Cui, Jiali and Li, Hanao and Han, Tian},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2024},
doi = {10.1007/978-3-031-73024-5_6},
url = {https://mlanthology.org/eccv/2024/yuan2024eccv-learning/}
}