Multi-Source Neural Variational Inference
Abstract
Learning from multiple sources of information is an important problem in machine-learning research. The key challenges are learning representations and formulating inference methods that take into account the complementarity and redundancy of various information sources. In this paper we formulate a variational-autoencoder-based multi-source learning framework in which each encoder is conditioned on a different information source. This allows us to relate the sources via the shared latent variables by computing divergence measures between the individual sources' posterior approximations. We explore a variety of options to learn these encoders and to integrate the beliefs they compute into a consistent posterior approximation. We visualise learned beliefs on a toy dataset and evaluate our methods for learning shared representations and structured output prediction, showing the trade-offs of learning separate encoders for each information source. Furthermore, we demonstrate how conflict detection and redundancy can increase the robustness of inference in a multi-source setting.
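The abstract's idea of integrating per-source beliefs into a consistent posterior, and of measuring divergences between the sources' posterior approximations, can be illustrated with a product-of-experts fusion of diagonal Gaussians and a closed-form Gaussian KL. A minimal NumPy sketch under illustrative assumptions (diagonal Gaussian posteriors, precision-weighted fusion; not necessarily the paper's exact aggregation scheme):

```python
import numpy as np

def fuse_poe(mus, sigmas):
    """Product-of-experts fusion of diagonal Gaussian posteriors.

    mus, sigmas: arrays of shape (num_sources, latent_dim).
    Returns the mean and std of the Gaussian product; the fusion is
    precision-weighted, so more confident sources dominate the belief.
    """
    precisions = 1.0 / np.square(sigmas)       # per-source precision
    fused_var = 1.0 / precisions.sum(axis=0)   # precisions add under the product
    fused_mu = fused_var * (precisions * mus).sum(axis=0)
    return fused_mu, np.sqrt(fused_var)

def gaussian_kl(mu1, sigma1, mu2, sigma2):
    """KL(N(mu1, sigma1^2) || N(mu2, sigma2^2)) for diagonal Gaussians,
    summed over latent dimensions; usable as a conflict measure between
    two sources' posterior approximations."""
    return np.sum(
        np.log(sigma2 / sigma1)
        + (np.square(sigma1) + np.square(mu1 - mu2)) / (2.0 * np.square(sigma2))
        - 0.5
    )
```

Fusing two unit-variance beliefs with means 0 and 2 yields a fused mean of 1 with variance 0.5: agreeing sources tighten the posterior, while a large KL between the per-source Gaussians would flag a conflict between sources.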
Cite
Text
Kurle et al. "Multi-Source Neural Variational Inference." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.33014114
Markdown
[Kurle et al. "Multi-Source Neural Variational Inference." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/kurle2019aaai-multi/) doi:10.1609/AAAI.V33I01.33014114
BibTeX
@inproceedings{kurle2019aaai-multi,
title = {{Multi-Source Neural Variational Inference}},
author = {Kurle, Richard and Günnemann, Stephan and van der Smagt, Patrick},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2019},
pages = {4114--4121},
doi = {10.1609/AAAI.V33I01.33014114},
url = {https://mlanthology.org/aaai/2019/kurle2019aaai-multi/}
}