Representation Discovery in Sequential Decision Making
Abstract
Automatically constructing novel representations of tasks from analysis of state spaces is a longstanding fundamental challenge in AI. I review recent progress on this problem for sequential decision making tasks modeled as Markov decision processes. Specifically, I discuss three classes of representation discovery problems: finding functional, state, and temporal abstractions. I describe solution techniques varying along several dimensions: diagonalization or dilation methods using approximate or exact transition models; reward-specific vs. reward-invariant methods; global vs. local representation construction methods; multiscale vs. flat discovery methods; and finally, orthogonal vs. redundant representation discovery methods. I conclude by describing a number of open problems for future work.
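The diagonalization methods mentioned in the abstract can be illustrated by a minimal sketch of proto-value functions: eigenvectors of the graph Laplacian of the state-space graph serve as smooth, reward-invariant basis functions. The 5-state chain below is a hypothetical example, not one from the paper.

```python
import numpy as np

# Hypothetical 5-state chain MDP: adjacency matrix of its state graph.
n = 5
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

# Combinatorial graph Laplacian L = D - A.
D = np.diag(A.sum(axis=1))
L = D - A

# Diagonalize: eigenvectors with the smallest eigenvalues are the
# smoothest functions on the graph and form a reward-invariant basis
# (proto-value functions) for approximating value functions.
eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order
basis = eigvecs[:, :3]  # keep the 3 smoothest basis functions
```

Because the basis is built only from the transition structure, the same eigenvectors can be reused across any reward function defined on this state space.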
Cite
Text

Mahadevan. "Representation Discovery in Sequential Decision Making." AAAI Conference on Artificial Intelligence, 2010. doi:10.1609/AAAI.V24I1.7766

Markdown

[Mahadevan. "Representation Discovery in Sequential Decision Making." AAAI Conference on Artificial Intelligence, 2010.](https://mlanthology.org/aaai/2010/mahadevan2010aaai-representation/) doi:10.1609/AAAI.V24I1.7766

BibTeX
@inproceedings{mahadevan2010aaai-representation,
title = {{Representation Discovery in Sequential Decision Making}},
author = {Mahadevan, Sridhar},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2010},
  pages = {1718--1721},
doi = {10.1609/AAAI.V24I1.7766},
url = {https://mlanthology.org/aaai/2010/mahadevan2010aaai-representation/}
}