Avoiding False Positive in Multi-Instance Learning
Abstract
In multi-instance learning, there are two kinds of prediction failures, i.e., false negatives and false positives. Current research mainly focuses on avoiding the former. We attempt to utilize the geometric distribution of instances inside positive bags to avoid both kinds of failure. Based on kernel principal component analysis, we define a projection constraint for each positive bag that classifies its constituent instances far away from the separating hyperplane while placing positive and negative instances on opposite sides. We apply the Constrained Concave-Convex Procedure to solve the resulting problem. Empirical results demonstrate that our approach offers improved generalization performance.
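The abstract builds on a kernel-PCA view of each positive bag's instances. As an illustration of that first step only, the following is a minimal sketch of projecting one bag's instances onto its leading kernel principal components; the function names, the RBF kernel choice, and the `gamma` parameter are assumptions for this sketch, and the paper's projection constraint and CCCP solver are not reproduced here.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF kernel matrix between the rows of X and Y."""
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def kpca_projection(bag, n_components=1, gamma=1.0):
    """Project one bag's instances onto the leading kernel principal
    components computed from that bag's own (centered) kernel matrix.
    This exposes the bag's geometric layout in feature space."""
    K = rbf_kernel(bag, bag, gamma)
    n = K.shape[0]
    # Center the kernel matrix in feature space.
    one_n = np.full((n, n), 1.0 / n)
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Eigendecomposition; keep the largest eigenpairs.
    eigvals, eigvecs = np.linalg.eigh(Kc)
    idx = np.argsort(eigvals)[::-1][:n_components]
    alphas = eigvecs[:, idx]
    lambdas = np.clip(eigvals[idx], 1e-12, None)
    # Scale coefficients so each principal direction has unit norm.
    alphas = alphas / np.sqrt(lambdas)
    # Coordinates of each instance along the principal directions.
    return Kc @ alphas

# Example: a hypothetical positive bag of 5 instances in 2-D.
rng = np.random.default_rng(0)
bag = rng.normal(size=(5, 2))
scores = kpca_projection(bag, n_components=1, gamma=0.5)
print(scores.ravel())
```

In the paper's formulation, such per-bag projections are used to constrain where a bag's instances may fall relative to the separating hyperplane; the resulting non-convex program is then handled with the Constrained Concave-Convex Procedure.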
Cite
Text
Han et al. "Avoiding False Positive in Multi-Instance Learning." Neural Information Processing Systems, 2010.
Markdown
[Han et al. "Avoiding False Positive in Multi-Instance Learning." Neural Information Processing Systems, 2010.](https://mlanthology.org/neurips/2010/han2010neurips-avoiding/)
BibTeX
@inproceedings{han2010neurips-avoiding,
  title = {{Avoiding False Positive in Multi-Instance Learning}},
  author = {Han, Yanjun and Tao, Qing and Wang, Jue},
  booktitle = {Neural Information Processing Systems},
  year = {2010},
  pages = {811-819},
  url = {https://mlanthology.org/neurips/2010/han2010neurips-avoiding/}
}