Learning to Learn with the Informative Vector Machine
Abstract
This paper describes an efficient method for learning the parameters of a Gaussian process (GP). The parameters are learned from multiple tasks which are assumed to have been drawn independently from the same GP prior. An efficient algorithm is obtained by extending the informative vector machine (IVM) algorithm to handle the multi-task learning case. The multi-task IVM (MT-IVM) saves computation by greedily selecting the most informative examples from the separate tasks. The MT-IVM is also shown to be more efficient than random sub-sampling on an artificial data-set and more effective than the traditional IVM in a speaker dependent phoneme recognition task.
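For intuition, the greedy selection the abstract alludes to can be sketched in a single-task setting: the IVM scores each candidate point by the reduction in posterior entropy its inclusion would bring, picks the best, and updates the posterior variances with a rank-1 correction. The RBF kernel, noise level, and function names below are illustrative assumptions for this sketch, not the paper's multi-task formulation.

```python
import numpy as np

def rbf_kernel(X, Y, lengthscale=1.0):
    # Squared-exponential kernel between row sets X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def ivm_select(X, d, noise=0.1, lengthscale=1.0):
    """Greedily pick d points whose inclusion most reduces the
    posterior entropy of a GP with Gaussian noise (IVM-style score)."""
    n = X.shape[0]
    K = rbf_kernel(X, X, lengthscale)
    var = K.diagonal().copy()   # current posterior marginal variances
    M = np.zeros((0, n))        # rows: scaled posterior-covariance columns of chosen points
    active = []
    for _ in range(d):
        # Entropy reduction per candidate: 0.5 * log(1 + var / noise).
        score = 0.5 * np.log1p(var / noise)
        score[active] = -np.inf  # never pick a point twice
        i = int(np.argmax(score))
        active.append(i)
        # Rank-1 update: S <- S - s_i s_i^T / (var_i + noise),
        # tracking only the diagonal plus the factors needed to form s_i.
        s_i = K[i] - M.T @ M[:, i]              # i-th column of current posterior cov
        m_new = s_i / np.sqrt(var[i] + noise)
        var = var - m_new**2
        M = np.vstack([M, m_new])
    return active
```

Because the Gaussian-noise entropy score is monotone in the posterior variance, the greedy step amounts to repeatedly choosing the currently most uncertain point, which is what makes the sparse approximation cheap relative to fitting the full GP.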
Cite
Text
Lawrence and Platt. "Learning to Learn with the Informative Vector Machine." International Conference on Machine Learning, 2004. doi:10.1145/1015330.1015382
Markdown
[Lawrence and Platt. "Learning to Learn with the Informative Vector Machine." International Conference on Machine Learning, 2004.](https://mlanthology.org/icml/2004/lawrence2004icml-learning/) doi:10.1145/1015330.1015382
BibTeX
@inproceedings{lawrence2004icml-learning,
title = {{Learning to Learn with the Informative Vector Machine}},
author = {Lawrence, Neil D. and Platt, John C.},
booktitle = {International Conference on Machine Learning},
year = {2004},
doi = {10.1145/1015330.1015382},
url = {https://mlanthology.org/icml/2004/lawrence2004icml-learning/}
}