Collaborative Multi-Output Gaussian Processes

Abstract

We introduce the collaborative multi-output Gaussian process (GP) model for learning dependent tasks with very large datasets. The model fosters task correlations by mixing sparse processes and sharing multiple sets of inducing points. This facilitates the application of variational inference and the derivation of an evidence lower bound that decomposes across inputs and outputs. We learn all the parameters of the model in a single stochastic optimization framework that scales to a large number of observations per output and a large number of outputs. We demonstrate our approach on a toy problem, two medium-sized datasets and a large dataset. The model achieves superior performance compared to single output learning and previous multi-output GP models, confirming the benefits of correlating sparsity structure of the outputs via the inducing points.
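The abstract's core idea, correlated outputs built by mixing sparse latent GPs that each carry their own set of inducing points, can be illustrated with a minimal generative sketch. This is an assumption-laden toy in numpy, not the paper's exact model or inference scheme: the kernel, the number of latent processes, and the mixing weights `W` are all illustrative choices.

```python
import numpy as np

def rbf(X1, X2, lengthscale=0.2, variance=1.0):
    # Squared-exponential kernel (illustrative choice; the paper is kernel-agnostic).
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2.0 * X1 @ X2.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
N, M, Q, P = 50, 8, 2, 3  # data points, inducing points per process, latent processes, outputs

X = np.linspace(0, 1, N)[:, None]
# One set of inducing inputs per sparse latent process (the "multiple sets" of the abstract).
Z = [np.linspace(0, 1, M)[:, None] for _ in range(Q)]

# Sample inducing values u_j ~ N(0, K_zz) and take each latent process as its
# sparse conditional mean given u_j, i.e. f_j(X) ≈ K_xz K_zz^{-1} u_j.
F = np.zeros((N, Q))
for j in range(Q):
    Kzz = rbf(Z[j], Z[j]) + 1e-8 * np.eye(M)  # jitter for numerical stability
    Kxz = rbf(X, Z[j])
    u = np.linalg.cholesky(Kzz) @ rng.standard_normal(M)
    F[:, j] = Kxz @ np.linalg.solve(Kzz, u)

# Mixing the shared sparse processes induces correlations across the P outputs.
W = rng.standard_normal((P, Q))          # hypothetical mixing weights
Y = F @ W.T + 0.05 * rng.standard_normal((N, P))  # noisy correlated outputs
```

In the actual model these quantities are not sampled but optimized: the inducing inputs, mixing weights, and variational parameters are learned jointly by stochastic optimization of an evidence lower bound that decomposes across inputs and outputs.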

Cite

Text

Nguyen and Bonilla. "Collaborative Multi-Output Gaussian Processes." Conference on Uncertainty in Artificial Intelligence, 2014.

Markdown

[Nguyen and Bonilla. "Collaborative Multi-Output Gaussian Processes." Conference on Uncertainty in Artificial Intelligence, 2014.](https://mlanthology.org/uai/2014/nguyen2014uai-collaborative/)

BibTeX

@inproceedings{nguyen2014uai-collaborative,
  title     = {{Collaborative Multi-Output Gaussian Processes}},
  author    = {Nguyen, Trung V. and Bonilla, Edwin V.},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {2014},
  pages     = {643--652},
  url       = {https://mlanthology.org/uai/2014/nguyen2014uai-collaborative/}
}