Reinforcement Learning with Parameterized Actions

Abstract

We introduce a model-free algorithm for learning in Markov decision processes with parameterized actions: discrete actions, each with its own continuous parameters. At each step the agent must select both which discrete action to take and which continuous parameters to use with it. We present the Q-PAMDP algorithm for learning in these domains, show that it converges to a local optimum, and compare it to direct policy search in the goal-scoring and Platform domains.
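
To make the action structure concrete, below is a minimal Python sketch of a parameterized action space and a single decision step that first picks a discrete action and then its continuous parameters. All names here (PARAM_BOUNDS, select_action, and the dummy Q-function and parameter policy) are illustrative assumptions, not code from the paper.

import numpy as np

# Hypothetical parameterized action space: each discrete action has its
# own continuous parameter space, given here as box bounds. All names
# are illustrative assumptions, not taken from the paper.
PARAM_BOUNDS = {
    "kick": (np.array([0.0, -1.0]), np.array([10.0, 1.0])),  # (power, direction)
    "dash": (np.array([0.0]), np.array([5.0])),              # (speed,)
}

def select_action(state, q_values, param_policy):
    """Choose a discrete action greedily from Q-values, then obtain the
    continuous parameters for that action from a parameter policy."""
    actions = list(PARAM_BOUNDS)
    scores = [q_values(state, a) for a in actions]
    a = actions[int(np.argmax(scores))]
    low, high = PARAM_BOUNDS[a]
    theta = np.clip(param_policy(state, a), low, high)  # keep params in bounds
    return a, theta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Dummy stand-ins for a learned Q-function and parameter policy.
    q = lambda s, a: float(rng.normal())
    pi = lambda s, a: rng.uniform(*PARAM_BOUNDS[a])
    print(select_action(state=None, q_values=q, param_policy=pi))

The point of the sketch is the two-level choice the abstract describes: the discrete action is selected first (here greedily over Q-values), and only then are that action's continuous parameters chosen from its own bounded parameter space.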

Cite

Text

Masson et al. "Reinforcement Learning with Parameterized Actions." AAAI Conference on Artificial Intelligence, 2016. doi:10.1609/AAAI.V30I1.10226

Markdown

[Masson et al. "Reinforcement Learning with Parameterized Actions." AAAI Conference on Artificial Intelligence, 2016.](https://mlanthology.org/aaai/2016/masson2016aaai-reinforcement/) doi:10.1609/AAAI.V30I1.10226

BibTeX

@inproceedings{masson2016aaai-reinforcement,
  title     = {{Reinforcement Learning with Parameterized Actions}},
  author    = {Masson, Warwick and Ranchod, Pravesh and Konidaris, George Dimitri},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2016},
  pages     = {1934--1940},
  doi       = {10.1609/AAAI.V30I1.10226},
  url       = {https://mlanthology.org/aaai/2016/masson2016aaai-reinforcement/}
}