A Regularization Framework for Multiple-Instance Learning

Abstract

This paper focuses on kernel methods for multi-instance (MI) learning. Existing methods require a bag's prediction to be identical to the maximum of the predictions of its individual instances. However, this is too restrictive, as only the sign matters in classification. In this paper, we provide a more complete regularization framework for MI learning by allowing different loss functions between the output of a bag and the outputs of its associated instances. This is especially important when we generalize the framework to multi-instance regression. Moreover, both bag and instance information can now be used directly in the optimization. Instead of using heuristics to solve the resultant nonlinear optimization problem, we use the constrained concave-convex procedure, which has well-studied convergence properties. Experiments on both classification and regression data sets show that the proposed method leads to improved performance.
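As background for the abstract, the standard MI assumption it refers to can be sketched in a few lines: a bag's output is taken to be the maximum of its instances' outputs, and a regularized hinge loss is applied to that bag-level output. This is a minimal illustrative sketch, not the paper's actual formulation; the linear scorer `f(x) = w·x + b`, the function names, and the use of a plain hinge loss are all assumptions for illustration.

```python
import numpy as np

def instance_scores(w, b, bag):
    # Hypothetical linear instance-level scorer f(x) = w . x + b,
    # applied row-wise to the instances in a bag.
    return bag @ w + b

def bag_score(w, b, bag):
    # Standard MI assumption: a bag's output is the maximum
    # over the outputs of its instances.
    return np.max(instance_scores(w, b, bag))

def hinge(z):
    # Hinge loss on the margin z = y * f(B).
    return np.maximum(0.0, 1.0 - z)

def mi_objective(w, b, bags, labels, C=1.0):
    # Regularized risk: ||w||^2 / 2 plus a hinge loss on each
    # bag's output, which depends on the instance outputs only
    # through the max (the restriction the paper relaxes).
    loss = sum(hinge(y * bag_score(w, b, B)) for B, y in zip(bags, labels))
    return 0.5 * np.dot(w, w) + C * loss
```

Because of the `max`, the loss on negative bags is a difference of convex functions, which is what makes the resulting problem nonconvex and motivates the constrained concave-convex procedure used in the paper.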

Cite

Text

Cheung and Kwok. "A Regularization Framework for Multiple-Instance Learning." International Conference on Machine Learning, 2006. doi:10.1145/1143844.1143869

Markdown

[Cheung and Kwok. "A Regularization Framework for Multiple-Instance Learning." International Conference on Machine Learning, 2006.](https://mlanthology.org/icml/2006/cheung2006icml-regularization/) doi:10.1145/1143844.1143869

BibTeX

@inproceedings{cheung2006icml-regularization,
  title     = {{A Regularization Framework for Multiple-Instance Learning}},
  author    = {Cheung, Pak-Ming and Kwok, James T.},
  booktitle = {International Conference on Machine Learning},
  year      = {2006},
  pages     = {193--200},
  doi       = {10.1145/1143844.1143869},
  url       = {https://mlanthology.org/icml/2006/cheung2006icml-regularization/}
}