Explicit Mean-Square Error Bounds for Monte-Carlo and Linear Stochastic Approximation

Abstract

This paper concerns error bounds for recursive equations subject to Markovian disturbances. Motivating examples abound within the fields of Markov chain Monte Carlo (MCMC) and Reinforcement Learning (RL), and many of these algorithms can be interpreted as special cases of stochastic approximation (SA). It is argued that it is not possible in general to obtain a Hoeffding bound on the error sequence, even when the underlying Markov chain is reversible and geometrically ergodic, as is the M/M/1 queue. This motivates the focus on mean-square error bounds for parameter estimates. It is shown that the mean-square error achieves the optimal rate of $O(1/n)$, subject to conditions on the step-size sequence. Moreover, the exact constants in the rate are obtained, which is of great value in algorithm design.
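As a rough numerical illustration of the abstract's claims (not taken from the paper itself), the Python sketch below simulates the simplest instance: estimating a steady-state mean $\pi(f)$ via the SA recursion $\theta_n = \theta_{n-1} + a_n\,(f(X_n) - \theta_{n-1})$ with step size $a_n = g/n$, driven by a small two-state Markov chain. The chain, the function f, and the gain g are hypothetical choices made only for this demo; the paper's setting is a general geometrically ergodic chain, and its results give conditions on the step-size gain under which the $O(1/n)$ rate holds. With g = 1 the recursion reduces exactly to the Monte-Carlo running average of f along the chain.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state Markov chain, chosen only for illustration.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])               # transition matrix
pi = np.array([2.0 / 3.0, 1.0 / 3.0])    # its stationary distribution
f = np.array([1.0, 4.0])                 # function whose steady-state mean is sought
target = float(pi @ f)                   # pi(f) = 2.0

def sa_estimate(n_steps: int, g: float = 1.0) -> float:
    """Scalar linear SA: theta_n = theta_{n-1} + a_n (f(X_n) - theta_{n-1}),
    with step size a_n = g / n.  For g = 1 this is exactly the Monte-Carlo
    running average of f along the chain."""
    x, theta = 0, 0.0
    for n in range(1, n_steps + 1):
        x = rng.choice(2, p=P[x])             # one Markov transition
        theta += (g / n) * (f[x] - theta)     # SA update
    return theta

# Empirical mean-square error over repeated runs; n * MSE should settle near
# a constant, consistent with the O(1/n) rate claimed in the abstract.
for n in [10**2, 10**3, 10**4]:
    errs = [(sa_estimate(n) - target) ** 2 for _ in range(200)]
    mse = float(np.mean(errs))
    print(f"n = {n:>5d}:  MSE = {mse:.3e},  n * MSE = {n * mse:.3f}")

The limiting value of n * MSE in such experiments is the asymptotic variance; the paper's contribution is to identify such constants exactly for linear SA, which is what makes the bounds useful for algorithm design.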

Cite

Text

Chen et al. "Explicit Mean-Square Error Bounds for Monte-Carlo and Linear Stochastic Approximation." Artificial Intelligence and Statistics, 2020.

Markdown

[Chen et al. "Explicit Mean-Square Error Bounds for Monte-Carlo and Linear Stochastic Approximation." Artificial Intelligence and Statistics, 2020.](https://mlanthology.org/aistats/2020/chen2020aistats-explicit/)

BibTeX

@inproceedings{chen2020aistats-explicit,
  title     = {{Explicit Mean-Square Error Bounds for Monte-Carlo and Linear Stochastic Approximation}},
  author    = {Chen, Shuhang and Devraj, Adithya and Busic, Ana and Meyn, Sean},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2020},
  pages     = {4173--4183},
  volume    = {108},
  url       = {https://mlanthology.org/aistats/2020/chen2020aistats-explicit/}
}