Metrics for Finite Markov Decision Processes

Abstract

We present metrics for measuring the similarity of states in a finite Markov decision process (MDP). The formulation of our metrics is based on the notion of bisimulation for MDPs, with an aim towards solving discounted infinite horizon reinforcement learning tasks. Such metrics can be used to aggregate states, as well as to better structure other value function approximators (e.g., memory-based or nearest-neighbor approximators). We provide bounds that relate our metric distances to the optimal values of states in the given MDP.
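The metric itself is defined as a fixed point of an operator that combines reward differences with a Kantorovich (optimal transport) distance between next-state distributions. As a minimal illustrative sketch, the fragment below iterates such an operator on a small *deterministic* finite MDP, where the Kantorovich distance between point-mass next-state distributions reduces to the metric itself; the names (`R`, `nxt`) and the weighting constant `c` are assumptions of this sketch, not the paper's exact formulation.

```python
# Illustrative sketch (not the paper's exact algorithm): fixed-point
# iteration of a bisimulation-style metric on a deterministic finite MDP.
# Update rule (deterministic special case):
#   d(s, t) <- max_a [ (1 - c) * |R(s, a) - R(t, a)| + c * d(nxt(s, a), nxt(t, a)) ]

def bisim_metric(R, nxt, c=0.9, tol=1e-8):
    """R[a][s]: reward; nxt[a][s]: deterministic next state; c in (0, 1)."""
    n = len(next(iter(R.values())))
    d = [[0.0] * n for _ in range(n)]
    while True:
        new = [[max((1 - c) * abs(R[a][s] - R[a][t]) + c * d[nxt[a][s]][nxt[a][t]]
                    for a in R)
                for t in range(n)]
               for s in range(n)]
        delta = max(abs(new[s][t] - d[s][t]) for s in range(n) for t in range(n))
        d = new
        if delta < tol:
            return d

# Tiny 3-state example: states 0 and 1 behave identically, state 2 differs.
R = {'a': [1.0, 1.0, 0.0]}
nxt = {'a': [2, 2, 0]}
d = bisim_metric(R, nxt)
# d[0][1] is 0.0 (bisimilar states), while d[0][2] approaches 1.0.
```

States with metric distance zero are bisimilar and can be safely aggregated; small nonzero distances bound how much their optimal values can differ, which is the basis of the value-function bounds mentioned in the abstract.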

Cite

Text

Ferns et al. "Metrics for Finite Markov Decision Processes." Conference on Uncertainty in Artificial Intelligence, 2004.

Markdown

[Ferns et al. "Metrics for Finite Markov Decision Processes." Conference on Uncertainty in Artificial Intelligence, 2004.](https://mlanthology.org/uai/2004/ferns2004uai-metrics/)

BibTeX

@inproceedings{ferns2004uai-metrics,
  title     = {{Metrics for Finite Markov Decision Processes}},
  author    = {Ferns, Norm and Panangaden, Prakash and Precup, Doina},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {2004},
  pages     = {162--169},
  url       = {https://mlanthology.org/uai/2004/ferns2004uai-metrics/}
}