MDPs with Unawareness
Abstract
Markov decision processes (MDPs) are widely used for modeling decision-making problems in robotics, automated control, and economics. Traditional MDPs assume that the decision maker (DM) knows all states and actions. However, this may not be true in many situations of interest. We define a new framework, MDPs with unawareness (MDPUs), to deal with the possibility that a DM may not be aware of all possible actions. We provide a complete characterization of when a DM can learn to play near-optimally in an MDPU, and give an algorithm that learns to play near-optimally when it is possible to do so, as efficiently as possible. In particular, we characterize when a near-optimal solution can be found in polynomial time.
Cite
Text
Halpern et al. "MDPs with Unawareness." Conference on Uncertainty in Artificial Intelligence, 2010.
Markdown
[Halpern et al. "MDPs with Unawareness." Conference on Uncertainty in Artificial Intelligence, 2010.](https://mlanthology.org/uai/2010/halpern2010uai-mdps/)
BibTeX
@inproceedings{halpern2010uai-mdps,
title = {{MDPs with Unawareness}},
author = {Halpern, Joseph Y. and Rong, Nan and Saxena, Ashutosh},
booktitle = {Conference on Uncertainty in Artificial Intelligence},
year = {2010},
pages = {228--235},
url = {https://mlanthology.org/uai/2010/halpern2010uai-mdps/}
}