RAAM: The Benefits of Robustness in Approximating Aggregated MDPs in Reinforcement Learning

Abstract

We describe how to use robust Markov decision processes for value function approximation with state aggregation. The robustness serves to reduce the sensitivity to the approximation error of sub-optimal policies in comparison to classical methods such as fitted value iteration. This reduces the bounds on the γ-discounted infinite-horizon performance loss by a factor of 1/(1-γ) while preserving polynomial-time computational complexity. Our experimental results show that using the robust representation can significantly improve the solution quality with minimal additional computational cost.
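To make the abstract's comparison concrete, the sketch below contrasts a pessimistic (robust) aggregation backup with the averaging used in classical aggregation. This is only a minimal illustration under assumed inputs, not the paper's reference implementation: the tabular MDP (transition tensor P, reward matrix R), the fixed state-to-cluster map agg, and the function name are all ours.

import numpy as np

def robust_aggregated_value_iteration(P, R, agg, n_agg, gamma=0.95, iters=500):
    """Value iteration on an aggregated MDP with a pessimistic (robust) backup.

    P   : (n_actions, n_states, n_states) transition probabilities
    R   : (n_actions, n_states) expected rewards
    agg : (n_states,) integer cluster index of each original state
    """
    v_agg = np.zeros(n_agg)                   # one value per aggregate state
    for _ in range(iters):
        # Expand aggregate values back to the original states and back them up.
        q = R + gamma * (P @ v_agg[agg])      # shape (n_actions, n_states)
        v_state = q.max(axis=0)               # greedy maximization over actions
        # Robust step: each aggregate keeps the WORST value among its members.
        # Classical aggregation / fitted value iteration would use a weighted
        # average here instead of the min.
        for k in range(n_agg):
            v_agg[k] = v_state[agg == k].min()
    return v_agg

# Tiny usage example on a random MDP with a hand-picked aggregation.
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(6), size=(2, 6))    # 2 actions, 6 states
R = rng.uniform(size=(2, 6))
agg = np.array([0, 0, 1, 1, 2, 2])            # 6 states -> 3 aggregates
print(robust_aggregated_value_iteration(P, R, agg, n_agg=3))

Taking the minimum over member states roughly corresponds to letting an adversary pick the representative state within each aggregate, which is the worst-case view the abstract credits with the tighter 1/(1-γ) performance-loss bound; replacing the min with a weighted average recovers the classical, non-robust aggregation.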

Cite

Text

Petrik and Subramanian. "RAAM: The Benefits of Robustness in Approximating Aggregated MDPs in Reinforcement Learning." Neural Information Processing Systems, 2014.

Markdown

[Petrik and Subramanian. "RAAM: The Benefits of Robustness in Approximating Aggregated MDPs in Reinforcement Learning." Neural Information Processing Systems, 2014.](https://mlanthology.org/neurips/2014/petrik2014neurips-raam/)

BibTeX

@inproceedings{petrik2014neurips-raam,
  title     = {{RAAM: The Benefits of Robustness in Approximating Aggregated MDPs in Reinforcement Learning}},
  author    = {Petrik, Marek and Subramanian, Dharmashankar},
  booktitle = {Neural Information Processing Systems},
  year      = {2014},
  pages     = {1979--1987},
  url       = {https://mlanthology.org/neurips/2014/petrik2014neurips-raam/}
}