Open Problem: Monotonicity of Learning
Abstract
We pose the question of to what extent a learning algorithm behaves monotonically in the following sense: does it perform better, in expectation, when one instance is added to the training set? We focus on empirical risk minimization and illustrate this property with several examples: two where it holds and two where it does not. We also relate it to the notion of PAC-learnability.
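In symbols, the property asks whether the expected risk E[R(A(S_n))], over training sets S_n of size n drawn i.i.d. from the data distribution, is non-increasing in n. Below is a minimal simulation sketch of how one might probe this empirically; the minimum-norm least-squares learner and the linear-Gaussian data model are our own illustrative assumptions, not the paper's examples.

import numpy as np

rng = np.random.default_rng(0)

def expected_risk(n, trials=3000, d=5, noise=0.5):
    # Monte Carlo estimate of the expected risk E[R(A(S_n))] of
    # minimum-norm least-squares ERM on a synthetic linear-Gaussian
    # task (an illustrative setup, not one from the paper).
    w_true = np.ones(d)
    risks = []
    for _ in range(trials):
        X = rng.standard_normal((n, d))
        y = X @ w_true + noise * rng.standard_normal(n)
        w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # ERM fit
        # For test points x ~ N(0, I), the expected squared error is
        # ||w_hat - w_true||^2 + noise^2 in closed form.
        risks.append(np.sum((w_hat - w_true) ** 2) + noise ** 2)
    return float(np.mean(risks))

ns = list(range(2, 15))
curve = [expected_risk(n) for n in ns]
for n, r in zip(ns, curve):
    print(f"n={n:2d}  estimated risk={r:.3f}")

# Monotone learning would mean the curve never increases with n;
# unregularized least squares typically peaks around n = d,
# violating monotonicity.
print("monotone:", bool(np.all(np.diff(curve) <= 0)))

The peak near n = d in this sketch is the classical "peaking" phenomenon for unregularized least squares, a standard source of non-monotonic learning curves for ERM.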
Cite

Text
Viering et al. "Open Problem: Monotonicity of Learning." Conference on Learning Theory, 2019.

Markdown
[Viering et al. "Open Problem: Monotonicity of Learning." Conference on Learning Theory, 2019.](https://mlanthology.org/colt/2019/viering2019colt-open/)

BibTeX
@inproceedings{viering2019colt-open,
  title     = {{Open Problem: Monotonicity of Learning}},
  author    = {Viering, Tom and Mey, Alexander and Loog, Marco},
  booktitle = {Conference on Learning Theory},
  year      = {2019},
  pages     = {3198--3201},
  volume    = {99},
  url       = {https://mlanthology.org/colt/2019/viering2019colt-open/}
}