Large Margin vs. Large Volume in Transductive Learning
Abstract
We focus on distribution-free transductive learning. In this setting the learning algorithm is given a ‘full sample’ of unlabeled points. A training sample is then selected uniformly at random from the full sample, and the labels of the training points are revealed. The goal is to predict the labels of the remaining unlabeled points as accurately as possible. The full sample partitions the transductive hypothesis space into a finite number of equivalence classes: all hypotheses in the same equivalence class generate the same dichotomy of the full sample. We consider a large volume principle, whereby the priority of each equivalence class is proportional to its “volume” in the hypothesis space.
Cite
Text
El-Yaniv et al. "Large Margin vs. Large Volume in Transductive Learning." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2008. doi:10.1007/978-3-540-87479-9_8
Markdown
[El-Yaniv et al. "Large Margin vs. Large Volume in Transductive Learning." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2008.](https://mlanthology.org/ecmlpkdd/2008/elyaniv2008ecmlpkdd-large/) doi:10.1007/978-3-540-87479-9_8
BibTeX
@inproceedings{elyaniv2008ecmlpkdd-large,
title = {{Large Margin vs. Large Volume in Transductive Learning}},
author = {El-Yaniv, Ran and Pechyony, Dmitry and Vapnik, Vladimir},
booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
year = {2008},
pages = {9-10},
doi = {10.1007/978-3-540-87479-9_8},
url = {https://mlanthology.org/ecmlpkdd/2008/elyaniv2008ecmlpkdd-large/}
}