Gaussian Processes Multiple Instance Learning
Abstract
This paper proposes a multiple instance learning (MIL) algorithm for Gaussian processes (GP). The GP-MIL model inherits two crucial benefits from GPs: (i) a principled manner of learning kernel parameters, and (ii) a probabilistic interpretation (e.g., variance in prediction) that is informative for better understanding the MIL prediction problem. The bag labeling protocol of the MIL problem, namely the existence of at least one positive instance in a bag, can be effectively represented by a sigmoid likelihood model through the max function over GP latent variables. To circumvent the intractability of exact GP inference and learning incurred by the non-differentiable max function, we suggest two approximations: first, the soft-max approximation; second, the use of witness indicator variables optimized with a deterministic annealing schedule. The effectiveness of GP-MIL against other state-of-the-art MIL approaches is demonstrated on several benchmark MIL datasets.
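The bag likelihood described above can be illustrated with a minimal NumPy sketch: the max over a bag's latent values is softened with a log-sum-exp approximation (sharpness parameter `alpha`), and a sigmoid maps the result to a bag-label probability. Function names and the specific `alpha` parameterization are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax_approx(f, alpha=10.0):
    # Log-sum-exp softening of max_i f_i over the latent values f of one bag:
    # (1/alpha) * log(sum_i exp(alpha * f_i)), which upper-bounds the true max
    # and approaches it as alpha -> infinity. Computed stably by shifting by max(f).
    f = np.asarray(f, dtype=float)
    m = f.max()
    return m + np.log(np.exp(alpha * (f - m)).sum()) / alpha

def bag_likelihood(f, y, alpha=10.0):
    # Sigmoid likelihood of a bag label y in {+1, -1} given the bag's GP latent
    # values f: p(y | f) = sigma(y * softmax(f)). A bag is positive if its
    # largest latent value is positive, matching the MIL bag-labeling protocol.
    g = softmax_approx(f, alpha)
    return 1.0 / (1.0 + np.exp(-y * g))
```

Because the soft-max is smooth in `f`, gradients with respect to the latent values (and hence kernel parameters) exist everywhere, which is what makes approximate GP inference tractable here.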
Cite
Text
Kim and De la Torre. "Gaussian Processes Multiple Instance Learning." International Conference on Machine Learning, 2010.
BibTeX
@inproceedings{kim2010icml-gaussian,
title = {{Gaussian Processes Multiple Instance Learning}},
author = {Kim, Minyoung and De la Torre, Fernando},
booktitle = {International Conference on Machine Learning},
year = {2010},
pages = {535--542},
url = {https://mlanthology.org/icml/2010/kim2010icml-gaussian/}
}