Learning Theory for Conditional Risk Minimization
Abstract
In this work we study the learnability of stochastic processes with respect to the conditional risk, i.e., the existence of a learning algorithm whose next-step performance improves as the amount of observed data grows. We introduce a notion of pairwise discrepancy between conditional distributions at different time steps and show how certain properties of these discrepancies can be used to construct a successful learning algorithm. Our main results are two theorems that establish criteria for learnability for many classes of stochastic processes, including all special cases studied previously in the literature.
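For concreteness, one standard way to formalize such a pairwise discrepancy (a hedged sketch in the spirit of the abstract; the paper's precise definition may differ in details) is the largest gap in expected loss over a hypothesis class $\mathcal{H}$ with loss function $\ell$:

% Sketch of a discrepancy between two conditional distributions P and Q,
% measured relative to the hypothesis class and loss; the notation here
% is illustrative, not necessarily the paper's.
\[
  d(P, Q) \;=\; \sup_{h \in \mathcal{H}}
    \Bigl| \mathbb{E}_{z \sim P}\bigl[\ell(h, z)\bigr]
         \;-\; \mathbb{E}_{z \sim Q}\bigl[\ell(h, z)\bigr] \Bigr|
\]

Under a definition of this form, if the conditional distributions at two time steps have small discrepancy, then a hypothesis with low risk under one also has low risk under the other, which is what allows past observations to inform next-step predictions.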
Cite
Text
Zimin and Lampert. "Learning Theory for Conditional Risk Minimization." International Conference on Artificial Intelligence and Statistics, 2017.
Markdown
[Zimin and Lampert. "Learning Theory for Conditional Risk Minimization." International Conference on Artificial Intelligence and Statistics, 2017.](https://mlanthology.org/aistats/2017/zimin2017aistats-learning/)
BibTeX
@inproceedings{zimin2017aistats-learning,
title = {{Learning Theory for Conditional Risk Minimization}},
author = {Zimin, Alexander and Lampert, Christoph H.},
booktitle = {International Conference on Artificial Intelligence and Statistics},
year = {2017},
pages = {213--222},
url = {https://mlanthology.org/aistats/2017/zimin2017aistats-learning/}
}