Semi-Supervised Classification by Low Density Separation
Abstract
We believe that the cluster assumption is key to successful semi-supervised learning. Based on this, we propose three semi-supervised algorithms: 1. deriving graph-based distances that emphasize low density regions between clusters, followed by training a standard SVM; 2. optimizing the Transductive SVM objective function, which places the decision boundary in low density regions, by gradient descent; 3. combining the first two to make maximum use of the cluster assumption. We compare with state-of-the-art algorithms and demonstrate superior accuracy for the latter two methods.
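The first ingredient above, a graph-based distance that emphasizes low density regions, can be illustrated with a toy sketch: reweight each edge of a graph over the data points by a convex function of its Euclidean length, so that one long edge crossing a sparse gap costs far more than many short hops inside a dense cluster. The specific reweighting `w(d) = exp(rho * d) - 1` and the toy coordinates below are illustrative choices, not necessarily the paper's exact formulation.

```python
# Toy sketch of a graph distance that penalizes low-density gaps.
# Edges of the complete graph are reweighted by w(d) = exp(rho*d) - 1
# (an illustrative convex choice), then shortest paths are computed,
# so within-cluster chains of short edges stay cheap while a single
# gap-crossing edge becomes very expensive.
import heapq
import math

def graph_distance(points, rho=1.0):
    """All-pairs shortest-path distances on the complete graph with
    edge weights exp(rho * euclidean_distance) - 1."""
    n = len(points)

    def w(d):
        return math.exp(rho * d) - 1.0

    dist = [[math.inf] * n for _ in range(n)]
    for src in range(n):
        dist[src][src] = 0.0
        heap = [(0.0, src)]
        while heap:  # Dijkstra from src
            d, u = heapq.heappop(heap)
            if d > dist[src][u]:
                continue
            for v in range(n):
                nd = d + w(math.dist(points[u], points[v]))
                if nd < dist[src][v]:
                    dist[src][v] = nd
                    heapq.heappush(heap, (nd, v))
    return dist

# One dense 1-D "cluster" (a chain of close points) plus an isolated
# point across a gap; the coordinates are made up for illustration.
pts = [(0.0,), (0.5,), (1.0,), (1.5,), (4.0,)]
D = graph_distance(pts, rho=1.0)

within = D[0][3]   # across the dense chain (via three 0.5-length hops)
across = D[3][4]   # chain end to the isolated point (one 2.5-length edge)

# The gap is stretched far more than within-cluster paths: the graph
# distance ratio exceeds the plain Euclidean ratio 2.5 / 1.5.
print(within, across, across / within > 2.5 / 1.5)
```

Feeding such cluster-respecting distances into a standard SVM (e.g. via a kernel derived from them) is what lets the decision boundary settle into the low density region between clusters.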
Cite
Text
Chapelle and Zien. "Semi-Supervised Classification by Low Density Separation." Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, 2005.
Markdown
[Chapelle and Zien. "Semi-Supervised Classification by Low Density Separation." Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, 2005.](https://mlanthology.org/aistats/2005/chapelle2005aistats-semisupervised/)
BibTeX
@inproceedings{chapelle2005aistats-semisupervised,
title = {{Semi-Supervised Classification by Low Density Separation}},
author = {Chapelle, Olivier and Zien, Alexander},
booktitle = {Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics},
year = {2005},
pages = {57--64},
volume = {R5},
url = {https://mlanthology.org/aistats/2005/chapelle2005aistats-semisupervised/}
}