Planning and Learning with Stochastic Action Sets
Abstract
In many practical uses of reinforcement learning (RL) the set of actions available at a given state is a random variable, with realizations governed by an exogenous stochastic process. Somewhat surprisingly, the foundations for such sequential decision processes have been unaddressed. In this work, we formalize and investigate MDPs with stochastic action sets (SAS-MDPs) to provide these foundations. We show that optimal policies and value functions in this model have a structure that admits a compact representation. From an RL perspective, we show that Q-learning with sampled action sets is sound. In model-based settings, we consider two important special cases: when individual actions are available with independent probabilities, and a sampling-based model for unknown distributions. We develop polynomial-time value and policy iteration methods for both cases, and provide a polynomial-time linear programming solution for the first case.
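For intuition, below is a minimal, illustrative sketch of the kind of Q-learning with sampled action sets the abstract refers to: a tabular agent whose action selection and Bellman backup are both restricted to the action set realized at each step. The environment interface (`reset`, `available_actions`, `step`) and all parameter names are assumptions made for illustration, not an API from the paper.

```python
import random
from collections import defaultdict

def q_learning_sas(env, episodes, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning where only an exogenously sampled subset of
    actions is available at each step (hypothetical env interface:
    reset() -> state, available_actions(state) -> list of actions,
    step(action) -> (next_state, reward, done))."""
    Q = defaultdict(float)  # Q[(state, action)]

    for _ in range(episodes):
        state = env.reset()
        # action set realized on arrival at `state`; assumed nonempty
        # (e.g., a default/no-op action is always available)
        actions = env.available_actions(state)
        done = False
        while not done:
            # epsilon-greedy restricted to the currently available actions
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])

            next_state, reward, done = env.step(action)
            next_actions = env.available_actions(next_state)

            # backup maximizes only over the action set sampled at the next state
            target = reward
            if not done and next_actions:
                target += gamma * max(Q[(next_state, a)] for a in next_actions)
            Q[(state, action)] += alpha * (target - Q[(state, action)])

            state, actions = next_state, next_actions
    return Q
```

The only change relative to standard tabular Q-learning is that both the greedy choice and the max in the target range over the sampled action set rather than the full action space.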
Cite
Text
Boutilier et al. "Planning and Learning with Stochastic Action Sets." International Joint Conference on Artificial Intelligence, 2018. doi:10.24963/IJCAI.2018/650

Markdown
[Boutilier et al. "Planning and Learning with Stochastic Action Sets." International Joint Conference on Artificial Intelligence, 2018.](https://mlanthology.org/ijcai/2018/boutilier2018ijcai-planning/) doi:10.24963/IJCAI.2018/650

BibTeX
@inproceedings{boutilier2018ijcai-planning,
title = {{Planning and Learning with Stochastic Action Sets}},
author = {Boutilier, Craig and Cohen, Alon and Hassidim, Avinatan and Mansour, Yishay and Meshi, Ofer and Mladenov, Martin and Schuurmans, Dale},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2018},
pages = {4674-4682},
doi = {10.24963/IJCAI.2018/650},
url = {https://mlanthology.org/ijcai/2018/boutilier2018ijcai-planning/}
}