New Inference Strategies for Solving Markov Decision Processes Using Reversible Jump MCMC

Abstract

In this paper we build on previous work which uses inference techniques, in particular Markov chain Monte Carlo (MCMC) methods, to solve parameterized control problems. We propose a number of modifications in order to make this approach more practical in general, higher-dimensional spaces. We first introduce a new target distribution which is able to incorporate more reward information from sampled trajectories. We also show how to break strong correlations between the policy parameters and sampled trajectories in order to sample more freely. Finally, we show how to incorporate these techniques in a principled manner to obtain estimates of the optimal policy.
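
For readers unfamiliar with this line of work, the sketch below illustrates the underlying idea of casting policy search as sampling: a plain Metropolis-Hastings sampler whose stationary distribution places more mass on policy parameters with higher simulated return. The toy one-dimensional control task, the parameter names, and the temperature are assumptions made purely for illustration; the paper's actual reversible jump construction over trajectories is considerably more involved than this.

import numpy as np

# Illustrative only: Metropolis-Hastings over policy parameters with target
# p(theta) proportional to exp(return(theta) / temperature). This is NOT the
# paper's reversible jump sampler; it only sketches the "policy search as
# inference" idea the paper builds on. The toy MDP, step size, and
# temperature below are assumptions made for the example.

def rollout_return(theta, horizon=20, rng=None):
    """Toy 1-D control task: a linear policy u = theta * x drives the state
    toward the origin; the reward is the negative squared state."""
    rng = rng or np.random.default_rng()
    x, total = 1.0, 0.0
    for _ in range(horizon):
        u = theta * x
        x = 0.9 * x + u + 0.05 * rng.standard_normal()
        total += -x ** 2
    return total

def mh_policy_search(n_samples=5000, step=0.1, temperature=1.0, seed=0):
    rng = np.random.default_rng(seed)
    theta = 0.0
    log_target = rollout_return(theta, rng=rng) / temperature
    samples = []
    for _ in range(n_samples):
        proposal = theta + step * rng.standard_normal()
        log_target_prop = rollout_return(proposal, rng=rng) / temperature
        # Accept with the usual Metropolis ratio; because the return is
        # estimated from a noisy rollout this is itself an approximation,
        # another simplification relative to the paper.
        if np.log(rng.uniform()) < log_target_prop - log_target:
            theta, log_target = proposal, log_target_prop
        samples.append(theta)
    return np.array(samples)

if __name__ == "__main__":
    draws = mh_policy_search()
    print("posterior mean of theta:", draws[2000:].mean())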

Cite

Text

Hoffman et al. "New Inference Strategies for Solving Markov Decision Processes Using Reversible Jump MCMC." Conference on Uncertainty in Artificial Intelligence, 2009.

Markdown

[Hoffman et al. "New Inference Strategies for Solving Markov Decision Processes Using Reversible Jump MCMC." Conference on Uncertainty in Artificial Intelligence, 2009.](https://mlanthology.org/uai/2009/hoffman2009uai-new/)

BibTeX

@inproceedings{hoffman2009uai-new,
  title     = {{New Inference Strategies for Solving Markov Decision Processes Using Reversible Jump MCMC}},
  author    = {Hoffman, Matthew and Kück, Hendrik and de Freitas, Nando and Doucet, Arnaud},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {2009},
  pages     = {223--231},
  url       = {https://mlanthology.org/uai/2009/hoffman2009uai-new/}
}