Large Scale Manifold Transduction
Abstract
We show how the regularizer of Transductive Support Vector Machines (TSVM) can be trained by stochastic gradient descent for linear models and multi-layer architectures. The resulting methods can be trained online, have vastly superior training and testing speed to existing TSVM algorithms, can encode prior knowledge in the network architecture, and obtain competitive error rates. We then go on to propose a natural generalization of the TSVM loss function that takes into account neighborhood and manifold information directly, unifying the two-stage Low Density Separation method into a single criterion, and leading to state-of-the-art results.
Cite
Text
Karlen et al. "Large Scale Manifold Transduction." International Conference on Machine Learning, 2008. doi:10.1145/1390156.1390213
Markdown
[Karlen et al. "Large Scale Manifold Transduction." International Conference on Machine Learning, 2008.](https://mlanthology.org/icml/2008/karlen2008icml-large/) doi:10.1145/1390156.1390213
BibTeX
@inproceedings{karlen2008icml-large,
title = {{Large Scale Manifold Transduction}},
author = {Karlen, Michael and Weston, Jason and Erkan, Ayse and Collobert, Ronan},
booktitle = {International Conference on Machine Learning},
year = {2008},
pages = {448--455},
doi = {10.1145/1390156.1390213},
url = {https://mlanthology.org/icml/2008/karlen2008icml-large/}
}