Geometry and Stability of Supervised Learning Problems
Abstract
We introduce a notion of distance between supervised learning problems, which we call the Risk distance. This distance, inspired by optimal transport, facilitates stability results; one can quantify how seriously issues like sampling bias, noise, limited data, and approximations might change a given problem by bounding how much these modifications can move the problem under the Risk distance. With the distance established, we explore the geometry of the resulting space of supervised learning problems, providing explicit geodesics and proving that the set of classification problems is dense in a larger class of problems. We also provide two variants of the Risk distance: one that incorporates specified weights on a problem's predictors, and one that is more sensitive to the contours of a problem's risk landscape.
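The abstract does not reproduce the definition of the Risk distance, so the following is only a hedged sketch of how an optimal-transport-style comparison of two supervised learning problems, and the kind of stability bound described above, might look. Everything here is an illustrative assumption: each problem is modeled as a pair $(\mu_i, \ell_i)$ of a data distribution on $X \times Y$ and a loss, $c$ is an assumed ground cost, and $L$ is a generic Lipschitz-type constant; none of this notation is taken from the paper.

% Hypothetical sketch only; the paper's actual Risk distance is defined in the full text.
% Problem i is modeled as a data distribution mu_i on X x Y together with a loss ell_i.
\[
  d\bigl((\mu_1,\ell_1),(\mu_2,\ell_2)\bigr)
    \;=\; \inf_{\pi \in \Pi(\mu_1,\mu_2)}
      \int_{(X\times Y)^2} c\bigl((x_1,y_1),(x_2,y_2)\bigr)\, \mathrm{d}\pi(x_1,y_1,x_2,y_2),
\]
% Pi(mu_1, mu_2) denotes the set of couplings of mu_1 and mu_2; the ground cost c is
% assumed to also account for the discrepancy between the losses ell_1 and ell_2.
\[
  \bigl|\,\mathrm{Risk}_1(f) - \mathrm{Risk}_2(f)\,\bigr|
    \;\le\; L \cdot d\bigl((\mu_1,\ell_1),(\mu_2,\ell_2)\bigr)
    \quad \text{for every predictor } f.
\]

Read this way, a perturbation such as sampling bias or added noise that moves a problem only a small distance under $d$ can change the risk of any fixed predictor by at most a proportionally small amount; again, this is a schematic of the general stability pattern, not the paper's theorem.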
Cite
Text
Mémoli et al. "Geometry and Stability of Supervised Learning Problems." Journal of Machine Learning Research, 2025.
Markdown
[Mémoli et al. "Geometry and Stability of Supervised Learning Problems." Journal of Machine Learning Research, 2025.](https://mlanthology.org/jmlr/2025/memoli2025jmlr-geometry/)
BibTeX
@article{memoli2025jmlr-geometry,
  title   = {{Geometry and Stability of Supervised Learning Problems}},
  author  = {Mémoli, Facundo and Vose, Brantley and Williamson, Robert C.},
  journal = {Journal of Machine Learning Research},
  year    = {2025},
  volume  = {26},
  pages   = {1--99},
  url     = {https://mlanthology.org/jmlr/2025/memoli2025jmlr-geometry/}
}