Bayesian Context Aggregation for Neural Processes
Abstract
Formulating scalable probabilistic regression models with reliable uncertainty estimates has been a long-standing challenge in machine learning research. Recently, casting probabilistic regression as a multi-task learning problem in terms of conditional latent variable (CLV) models such as the Neural Process (NP) has shown promising results. In this paper, we focus on context aggregation, a central component of such architectures, which fuses information from multiple context data points. So far, this aggregation operation has been treated separately from the inference of a latent representation of the target function in CLV models. Our key contribution is to combine these steps into one holistic mechanism by phrasing context aggregation as a Bayesian inference problem. The resulting Bayesian Aggregation (BA) mechanism enables principled handling of task ambiguity, which is key for efficiently processing context information. We demonstrate on a range of challenging experiments that BA consistently improves upon the performance of traditional mean aggregation while remaining computationally efficient and fully compatible with existing NP-based models.
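The core idea of Bayesian Aggregation can be illustrated with a small sketch. In BA, each context point contributes a Gaussian factor (a latent observation with a mean and a variance) that updates a Gaussian prior over the latent task descriptor z via standard conjugate Gaussian inference: precisions add, and means are precision-weighted. The function below is a minimal illustration under these assumptions; the names (`r`, `sigma_r2`, `mu_0`, `sigma2_0`) and the encoder that would produce the per-point observations are hypothetical and not taken from the paper's code.

```python
import numpy as np

def bayesian_aggregation(r, sigma_r2, mu_0, sigma2_0):
    """Aggregate per-context-point Gaussian factors into a posterior over z.

    r        : (N, D) latent observations, one row per context point
    sigma_r2 : (N, D) per-point observation variances (task-ambiguity estimates)
    mu_0     : (D,)   prior mean of the latent variable z
    sigma2_0 : (D,)   prior variance of z
    Returns (mu_z, sigma2_z), the factorized Gaussian posterior over z.
    """
    # Conjugate Gaussian update: precisions of all factors add up.
    precision = 1.0 / sigma2_0 + np.sum(1.0 / sigma_r2, axis=0)
    sigma2_z = 1.0 / precision
    # Posterior mean: prior mean shifted by precision-weighted residuals.
    mu_z = mu_0 + sigma2_z * np.sum((r - mu_0) / sigma_r2, axis=0)
    return mu_z, sigma2_z
```

Because the update only involves sums over context points, the aggregation is permutation-invariant, and adding more context points can only decrease the posterior variance, giving the principled handling of task ambiguity described above.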
Cite
Text
Volpp et al. "Bayesian Context Aggregation for Neural Processes." International Conference on Learning Representations, 2021.
Markdown
[Volpp et al. "Bayesian Context Aggregation for Neural Processes." International Conference on Learning Representations, 2021.](https://mlanthology.org/iclr/2021/volpp2021iclr-bayesian/)
BibTeX
@inproceedings{volpp2021iclr-bayesian,
  title     = {{Bayesian Context Aggregation for Neural Processes}},
  author    = {Volpp, Michael and Flürenbrock, Fabian and Grossberger, Lukas and Daniel, Christian and Neumann, Gerhard},
  booktitle = {International Conference on Learning Representations},
  year      = {2021},
  url       = {https://mlanthology.org/iclr/2021/volpp2021iclr-bayesian/}
}