Random Permutation Online Isotonic Regression
Abstract
We revisit isotonic regression on linear orders, the problem of fitting monotonic functions to best explain the data, in an online setting. It was previously shown that online isotonic regression is unlearnable in a fully adversarial model, which led to its study in the fixed design model. Here, we instead develop the more practical random permutation model. We show that the regret is bounded above by the excess leave-one-out loss, for which we develop efficient algorithms and matching lower bounds. We also analyze the class of simple and popular forward algorithms and recommend where to look for algorithms for online isotonic regression on partial orders.
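For context, the offline version of the problem the abstract refers to (fitting a non-decreasing sequence to data under squared loss on a linear order) is classically solved by the pool-adjacent-violators algorithm (PAVA). The sketch below is not from the paper; it is a minimal illustrative implementation that merges adjacent blocks whenever their means violate monotonicity.

```python
def pava(y):
    """Pool Adjacent Violators: least-squares non-decreasing fit to y.

    Illustrative sketch of the classic offline isotonic regression
    solver; runs in O(n) amortized time over a linear order.
    """
    # Each block stores [sum, count]; its fitted value is sum / count.
    blocks = []
    for v in y:
        blocks.append([float(v), 1])
        # Merge backwards while the previous block's mean exceeds the
        # current one's (compare s_prev/c_prev > s_cur/c_cur cross-multiplied).
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    # Expand each block back to per-point fitted values.
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)
    return fit


# Example: the dip at position 2 is pooled with its left neighbor.
print(pava([1, 3, 2]))  # -> [1.0, 2.5, 2.5]
```

The online problem studied in the paper is harder: points arrive one at a time (in random order, under the random permutation model) and must be predicted before their labels are revealed, rather than fitted in a single batch.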
Cite
Text
Kotlowski et al. "Random Permutation Online Isotonic Regression." Neural Information Processing Systems, 2017.
Markdown
[Kotlowski et al. "Random Permutation Online Isotonic Regression." Neural Information Processing Systems, 2017.](https://mlanthology.org/neurips/2017/kotlowski2017neurips-random/)
BibTeX
@inproceedings{kotlowski2017neurips-random,
title = {{Random Permutation Online Isotonic Regression}},
author = {Kotlowski, Wojciech and Koolen, Wouter M. and Malek, Alan},
booktitle = {Neural Information Processing Systems},
year = {2017},
pages = {4180--4189},
url = {https://mlanthology.org/neurips/2017/kotlowski2017neurips-random/}
}