Piecewise Linear Value Function Approximation for Factored MDPs
Abstract
A number of proposals have been put forth in recent years for the solution of Markov decision processes (MDPs) whose state (and sometimes action) spaces are factored. One recent class of methods involves linear value function approximation, where the optimal value function is assumed to be a linear combination of some set of basis functions, with the aim of finding suitable weights. While sophisticated techniques have been developed for finding the best approximation within this constrained space, few methods have been proposed for choosing a suitable basis set, or modifying it if solution quality is found wanting. We propose a general framework, and specific proposals, that address both of these questions. In particular, we examine weakly coupled MDPs where a number of subtasks can be viewed independently modulo resource constraints. We then describe methods for constructing a piecewise linear combination of the subtask value functions, using greedy decision tree techniques. We argue that this architecture is suitable for many types of MDPs whose combinatorics are determined largely by the existence of multiple conflicting objectives.
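The linear value function approximation the abstract describes can be illustrated with a minimal sketch: the value function is assumed to be a weighted combination of basis functions, and the weights are fit by a least-squares fixed-point condition on the Bellman equation. The tiny MDP, transition matrix, reward vector, and basis set below are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

# A tiny explicit MDP under a fixed policy: transition matrix P, reward r.
gamma = 0.9
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.5, 0.0, 0.0, 0.5]])
r = np.array([0.0, 1.0, 0.0, 2.0])

# Hand-picked basis set H: a constant feature plus one indicator feature.
# V(s) is approximated as H @ w for some weight vector w.
H = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 0.0],
              [1.0, 1.0]])

# Exact policy value, for comparison: solve (I - gamma P) v = r.
v_exact = np.linalg.solve(np.eye(4) - gamma * P, r)

# Least-squares fixed point: find w so that H w best matches r + gamma P H w.
A = H.T @ (H - gamma * P @ H)
b = H.T @ r
w = np.linalg.solve(A, b)
v_approx = H @ w
```

The approximation quality here is entirely determined by the basis set `H`; the paper's contribution concerns how such a basis can be chosen or refined, rather than the fitting step itself.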
Cite
Text
Poupart et al. "Piecewise Linear Value Function Approximation for Factored MDPs." AAAI Conference on Artificial Intelligence, 2002. doi:10.5555/777092.777140
Markdown
[Poupart et al. "Piecewise Linear Value Function Approximation for Factored MDPs." AAAI Conference on Artificial Intelligence, 2002.](https://mlanthology.org/aaai/2002/poupart2002aaai-piecewise/) doi:10.5555/777092.777140
BibTeX
@inproceedings{poupart2002aaai-piecewise,
title = {{Piecewise Linear Value Function Approximation for Factored MDPs}},
author = {Poupart, Pascal and Boutilier, Craig and Patrascu, Relu and Schuurmans, Dale},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2002},
pages = {292--299},
doi = {10.5555/777092.777140},
url = {https://mlanthology.org/aaai/2002/poupart2002aaai-piecewise/}
}