Semi-Supervised Learning via Generalized Maximum Entropy
Abstract
Various supervised inference methods can be analyzed as convex duals of the generalized maximum entropy (MaxEnt) framework. Generalized MaxEnt aims to find a distribution that maximizes an entropy function while respecting prior information represented as potential functions in miscellaneous forms of constraints and/or penalties. We extend this framework to semi-supervised learning by incorporating unlabeled data via modifications to these potential functions reflecting structural assumptions on the data geometry. The proposed approach leads to a family of discriminative semi-supervised algorithms that are convex, scalable, inherently multi-class, easy to implement, and naturally kernelizable. Experimental evaluation of special cases shows the competitiveness of our methodology.
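To make the core idea concrete, the sketch below works through the classic discrete MaxEnt setup the abstract builds on: find the distribution of maximum entropy subject to a moment constraint encoded by a potential (feature) function. This is an illustrative toy (Jaynes' biased-die example), not the paper's semi-supervised algorithm; the feature choice and target mean are assumptions for the demo. The dual problem minimizes log Z(λ) − λ·μ over the multiplier λ, solved here by Newton's method.

```python
import numpy as np

# States of a six-sided die and a single potential function f(x) = face value.
states = np.arange(1, 7).astype(float)
f = states
target_mean = 4.5  # prior information: E_p[f] = 4.5 (uniform die gives 3.5)

# Dual view: the MaxEnt solution has the exponential-family form
#   p(x) ∝ exp(λ f(x)),
# where λ is chosen so the constraint E_p[f] = target_mean holds.
# The dual objective log Z(λ) − λ·μ is convex; Newton's method suffices,
# since its gradient is E_λ[f] − μ and its Hessian is Var_λ[f] > 0.
lam = 0.0
for _ in range(50):
    weights = np.exp(lam * f)
    p = weights / weights.sum()
    mean = p @ f
    var = p @ (f - mean) ** 2
    lam -= (mean - target_mean) / var  # Newton step on the dual

print("multiplier:", lam)
print("MaxEnt distribution:", p)
print("achieved mean:", p @ f)
```

The resulting distribution tilts probability mass toward higher faces just enough to meet the constraint while staying as uniform (high-entropy) as possible; the paper's semi-supervised extension modifies such potential functions using unlabeled data.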
Cite
Text
Erkan and Altun. "Semi-Supervised Learning via Generalized Maximum Entropy." Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010.
Markdown
[Erkan and Altun. "Semi-Supervised Learning via Generalized Maximum Entropy." Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010.](https://mlanthology.org/aistats/2010/erkan2010aistats-semisupervised/)
BibTeX
@inproceedings{erkan2010aistats-semisupervised,
title = {{Semi-Supervised Learning via Generalized Maximum Entropy}},
author = {Erkan, Ayse and Altun, Yasemin},
booktitle = {Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics},
year = {2010},
pages = {209--216},
volume = {9},
url = {https://mlanthology.org/aistats/2010/erkan2010aistats-semisupervised/}
}