Asymptotic Optimality of Transductive Confidence Machine
Abstract
Transductive Confidence Machine (TCM) is a way of converting standard machine-learning algorithms into algorithms that output predictive regions rather than point predictions. It has been shown recently that TCM is well-calibrated when used in the on-line mode: at any confidence level 1 − σ, the long-run relative frequency of errors is guaranteed not to exceed σ, provided the examples are generated independently from the same probability distribution P. Therefore, the number of “uncertain” predictive regions (i.e., those containing more than one label) becomes the sole measure of performance. The main result of this paper is that for any probability distribution P (assumed to generate the examples), it is possible to construct a TCM (guaranteed to be well-calibrated even if the assumption is wrong) that performs asymptotically as well as the best region predictor under P.
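The mechanism the abstract refers to can be illustrated with a minimal sketch of a transductive conformal (confidence-machine) predictor. The nonconformity measure used here (distance to the nearest example with the same label) and all function names are illustrative assumptions, not the construction from the paper: for each candidate label, the new example is provisionally added to the training set, a p-value is computed from the nonconformity scores, and the predictive region keeps every label whose p-value exceeds the significance level σ.

```python
import math

def nonconformity(point, label, data):
    # Illustrative nonconformity score: distance to the nearest *other*
    # example carrying the same label (smaller = more typical).
    ds = [math.dist(point, x) for x, y in data if y == label and x is not point]
    return min(ds) if ds else float("inf")

def prediction_region(train, x_new, labels, sigma):
    """Transductive conformal predictive region at significance level sigma.

    For each candidate label y, augment the training set with (x_new, y),
    score every example, and compute the p-value as the fraction of scores
    at least as nonconforming as the new example's score.
    """
    region = set()
    for y in labels:
        augmented = train + [(x_new, y)]
        scores = [nonconformity(x, lab, augmented) for x, lab in augmented]
        new_score = scores[-1]
        p_value = sum(1 for s in scores if s >= new_score) / len(augmented)
        if p_value > sigma:
            region.add(y)
    return region

# Two well-separated classes; the test point sits inside class "a".
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
print(prediction_region(train, (0.5, 0.5), {"a", "b"}, 0.2))   # certain: one label
print(prediction_region(train, (0.5, 0.5), {"a", "b"}, 0.1))   # uncertain: two labels
```

At σ = 0.2 the region is the single label {"a"}; tightening the significance level to σ = 0.1 forces the predictor to hedge and output the uncertain region {"a", "b"}, which is exactly the kind of prediction the abstract's performance measure counts.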
Cite
Text
Vovk. "Asymptotic Optimality of Transductive Confidence Machine." International Conference on Algorithmic Learning Theory, 2002. doi:10.1007/3-540-36169-3_27
Markdown
[Vovk. "Asymptotic Optimality of Transductive Confidence Machine." International Conference on Algorithmic Learning Theory, 2002.](https://mlanthology.org/alt/2002/vovk2002alt-asymptotic/) doi:10.1007/3-540-36169-3_27
BibTeX
@inproceedings{vovk2002alt-asymptotic,
title = {{Asymptotic Optimality of Transductive Confidence Machine}},
author = {Vovk, Vladimir},
booktitle = {International Conference on Algorithmic Learning Theory},
year = {2002},
pages = {336-350},
doi = {10.1007/3-540-36169-3_27},
url = {https://mlanthology.org/alt/2002/vovk2002alt-asymptotic/}
}