Variational Mixture of Gaussian Process Experts
Abstract
Mixture of Gaussian processes models extend a single Gaussian process with the ability to model multi-modal data and to reduce training complexity. Previous inference algorithms for these models are mostly based on Gibbs sampling, which can be very slow, particularly for large-scale data sets. We present a new generative mixture of experts model. Each expert is still a Gaussian process but is reformulated as a linear model; this breaks the dependency among training outputs and enables us to use a much faster variational Bayesian algorithm for training. Our gating network is more flexible than those of previous generative approaches, as the inputs for each expert are modeled by a Gaussian mixture model. The number of experts and the number of Gaussian components per expert are inferred automatically. A variety of tests demonstrate the advantages of our method.
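
To make the generative structure described above concrete, here is a minimal sketch in assumed notation: the symbols $K$, $\pi_k$, $\boldsymbol{\mu}_k$, $\boldsymbol{\Sigma}_k$, $\boldsymbol{\phi}$, $\mathbf{w}_k$, and $\sigma_k$ are illustrative rather than taken from the paper, and for readability each expert is given a single Gaussian input component, whereas the paper allows a full Gaussian mixture per expert. The gating is the posterior over experts induced by the generative input model, and each expert's Gaussian process is rewritten in linear-model (weight-space) form:

\[
p(z = k \mid \mathbf{x}) = \frac{\pi_k\, \mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k)}{\sum_{j=1}^{K} \pi_j\, \mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}_j, \boldsymbol{\Sigma}_j)}, \qquad
y \mid \mathbf{x}, z = k \;\sim\; \mathcal{N}\!\big(\boldsymbol{\phi}(\mathbf{x})^{\top} \mathbf{w}_k,\; \sigma_k^2\big),
\]
\[
p(y \mid \mathbf{x}) = \sum_{k=1}^{K} p(z = k \mid \mathbf{x})\; \mathcal{N}\!\big(y \mid \boldsymbol{\phi}(\mathbf{x})^{\top} \mathbf{w}_k,\; \sigma_k^2\big).
\]

Conditioned on the weights $\mathbf{w}_k$, the training outputs are independent of one another; this is the decoupling that permits the variational Bayesian training the abstract refers to in place of Gibbs sampling.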
Cite

Yuan and Neubauer. "Variational Mixture of Gaussian Process Experts." Neural Information Processing Systems, 2008. https://mlanthology.org/neurips/2008/yuan2008neurips-variational/

BibTeX:
@inproceedings{yuan2008neurips-variational,
title = {{Variational Mixture of Gaussian Process Experts}},
author = {Yuan, Chao and Neubauer, Claus},
booktitle = {Neural Information Processing Systems},
year = {2008},
pages = {1897-1904},
url = {https://mlanthology.org/neurips/2008/yuan2008neurips-variational/}
}