Infinite Latent SVM for Classification and Multi-Task Learning
Abstract
Unlike existing nonparametric Bayesian models, which rely solely on specially conceived priors to incorporate domain knowledge for discovering improved latent representations, we study nonparametric Bayesian inference with regularization on the desired posterior distributions. While priors can indirectly affect posterior distributions through Bayes' theorem, imposing posterior regularization is arguably more direct and in some cases can be much easier. We particularly focus on developing infinite latent support vector machines (iLSVM) and multi-task infinite latent support vector machines (MT-iLSVM), which explore the large-margin idea in combination with a nonparametric Bayesian model for discovering predictive latent features for classification and multi-task learning, respectively. We present efficient inference methods and report empirical studies on several benchmark datasets. Our results appear to demonstrate the merits inherited from both large-margin learning and Bayesian nonparametrics.
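For intuition, the posterior regularization described in the abstract can be sketched as a constrained variational problem. The notation below (variational distribution $q$, prior $\pi$, latent variables $\mathbf{M}$, slacks $\xi_n$, cost $C$, discriminant $f$) is ours, added only to illustrate the idea, and is not the paper's exact formulation:

\[
\begin{aligned}
\min_{q(\mathbf{M}),\,\boldsymbol{\xi}\ge 0}\quad & \mathrm{KL}\big(q(\mathbf{M})\,\|\,\pi(\mathbf{M})\big) \;-\; \mathbb{E}_{q}\big[\log p(\mathcal{D}\mid \mathbf{M})\big] \;+\; C\sum_{n}\xi_{n} \\
\text{s.t.}\quad & y_{n}\,\mathbb{E}_{q}\big[f(\mathbf{x}_{n};\mathbf{M})\big] \;\ge\; 1-\xi_{n} \qquad \forall n,
\end{aligned}
\]

Dropping the expectation constraints recovers standard (variational) Bayesian inference from the KL and likelihood terms alone, while the slack-penalized constraints impose an SVM-style hinge penalty on the expected discriminant over the inferred latent features, which is how the large-margin criterion acts directly on the posterior rather than through the prior.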
Cite
Text
Zhu et al. "Infinite Latent SVM for Classification and Multi-Task Learning." Neural Information Processing Systems, 2011.Markdown
[Zhu et al. "Infinite Latent SVM for Classification and Multi-Task Learning." Neural Information Processing Systems, 2011.](https://mlanthology.org/neurips/2011/zhu2011neurips-infinite/)BibTeX
@inproceedings{zhu2011neurips-infinite,
  title = {{Infinite Latent SVM for Classification and Multi-Task Learning}},
  author = {Zhu, Jun and Chen, Ning and Xing, Eric P.},
  booktitle = {Neural Information Processing Systems},
  year = {2011},
  pages = {1620--1628},
  url = {https://mlanthology.org/neurips/2011/zhu2011neurips-infinite/}
}