Continuity of Performance Metrics for Thin Feature Maps
Abstract
We study the class of hypotheses composed of linear functionals superimposed with smooth feature maps. We show that for a “typical” smooth feature map, pointwise convergence of hypotheses implies convergence of standard metrics, such as error rate or area under the ROC curve, with probability 1 with respect to the selection of the test sample from a (Lebesgue measurable) probability density. The proofs use transversality theory. The crux is to show that for every “typical”, sufficiently smooth feature map into a finite-dimensional vector space, the preimage of every affine hyperplane has Lebesgue measure 0. The results extend to every real analytic, in particular polynomial, feature map whose domain is connected, provided the limit hypothesis is non-constant. In the process we give an elementary proof of the fundamental lemma that the zero locus of a real analytic function on a connected domain either fills the whole space or forms a subset of measure 0.
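The fundamental lemma mentioned at the end of the abstract can be stated as the following dichotomy (a standard formulation; the notation here is ours, not taken from the paper):

```latex
% Zero-set dichotomy for real analytic functions.
% Let U \subseteq \mathbb{R}^n be open and connected, and let
% f : U \to \mathbb{R} be real analytic. Writing
\[
  Z(f) \;=\; \{\, x \in U : f(x) = 0 \,\},
\]
% the lemma asserts that exactly one of two cases holds:
\[
  Z(f) = U
  \qquad\text{or}\qquad
  \lambda^n\bigl(Z(f)\bigr) = 0,
\]
% where \lambda^n denotes n-dimensional Lebesgue measure.
```

Applied to a feature map $\phi : U \to \mathbb{R}^d$ with real analytic components and an affine hyperplane $H = \{y : \langle w, y\rangle = b\}$, the function $f(x) = \langle w, \phi(x)\rangle - b$ is real analytic, so the preimage $\phi^{-1}(H) = Z(f)$ has measure 0 whenever $f$ is not identically zero, i.e. whenever the limit hypothesis is non-constant.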
Cite
Text
Kowalczyk. "Continuity of Performance Metrics for Thin Feature Maps." International Conference on Algorithmic Learning Theory, 2007. doi:10.1007/978-3-540-75225-7_28
Markdown
[Kowalczyk. "Continuity of Performance Metrics for Thin Feature Maps." International Conference on Algorithmic Learning Theory, 2007.](https://mlanthology.org/alt/2007/kowalczyk2007alt-continuity/) doi:10.1007/978-3-540-75225-7_28
BibTeX
@inproceedings{kowalczyk2007alt-continuity,
title = {{Continuity of Performance Metrics for Thin Feature Maps}},
author = {Kowalczyk, Adam},
booktitle = {International Conference on Algorithmic Learning Theory},
year = {2007},
pages = {343--357},
doi = {10.1007/978-3-540-75225-7_28},
url = {https://mlanthology.org/alt/2007/kowalczyk2007alt-continuity/}
}