Hilbert Space Embeddings of POMDPs

Abstract

A nonparametric approach to policy learning in POMDPs is proposed. The approach represents distributions over the states, observations, and actions as embeddings in feature spaces, which are reproducing kernel Hilbert spaces. Distributions over states given the observations are obtained by applying the kernel Bayes' rule to these distribution embeddings. Policies and value functions are defined on the feature space over states, which leads to a feature space expression of the Bellman equation. Value iteration may then be used to estimate the optimal value function and the associated policy. Experimental results confirm that the correct policy is learned using the feature space representation.
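
As a rough illustration of the embedding machinery the abstract refers to, the sketch below estimates a posterior state embedding from joint samples using a regularized conditional mean embedding, one building block of the kernel Bayes' rule (the full rule additionally re-weights by a prior embedding, which is omitted here). The function names, Gaussian kernel, regularization constant, and toy data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rbf_gram(A, B, sigma=1.0):
    # Gaussian (RBF) Gram matrix between the rows of A and the rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def posterior_weights(Y, y_obs, lam=1e-3, sigma=1.0):
    # Regularized conditional mean embedding: weights alpha such that the
    # embedding of P(state | observation = y_obs) is approximated by
    # sum_i alpha_i * phi(x_i) over the sampled states x_i.
    n = len(Y)
    G = rbf_gram(Y, Y, sigma)                     # Gram matrix on observations
    k = rbf_gram(Y, y_obs.reshape(1, -1), sigma)  # kernel vector at the new observation
    return np.linalg.solve(G + n * lam * np.eye(n), k).ravel()

# Toy joint sample: latent states X, noisy observations Y (names are illustrative).
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(200, 1))
Y = X + 0.3 * rng.standard_normal((200, 1))

alpha = posterior_weights(Y, y_obs=np.array([1.0]))
# With the identity feature on states, the embedding recovers E[X | Y = 1.0].
print("posterior mean of the state:", alpha @ X.ravel())
```

In the paper's setting, value functions live in the same feature space, so a belief embedding of this form can be plugged directly into a feature space Bellman update during value iteration.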

Cite

Text

Nishiyama et al. "Hilbert Space Embeddings of POMDPs." Conference on Uncertainty in Artificial Intelligence, 2012.

Markdown

[Nishiyama et al. "Hilbert Space Embeddings of POMDPs." Conference on Uncertainty in Artificial Intelligence, 2012.](https://mlanthology.org/uai/2012/nishiyama2012uai-hilbert/)

BibTeX

@inproceedings{nishiyama2012uai-hilbert,
  title     = {{Hilbert Space Embeddings of POMDPs}},
  author    = {Nishiyama, Yu and Boularias, Abdeslam and Gretton, Arthur and Fukumizu, Kenji},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {2012},
  pages     = {644--653},
  url       = {https://mlanthology.org/uai/2012/nishiyama2012uai-hilbert/}
}