Learning Sparse Representations in Reinforcement Learning with Sparse Coding
Abstract
A variety of representation learning approaches have been investigated for reinforcement learning; much less attention, however, has been given to investigating the utility of sparse coding. Outside of reinforcement learning, sparse coding representations have been widely used, with non-convex objectives that result in discriminative representations. In this work, we develop a supervised sparse coding objective for policy evaluation. Despite the non-convexity of this objective, we prove that all local minima are global minima, making the approach amenable to simple optimization strategies. We empirically show that it is key to use a supervised objective, rather than the more straightforward unsupervised sparse coding approach. We then compare the learned representations to a canonical fixed sparse representation, called tile-coding, demonstrating that the sparse coding representation outperforms a wide variety of tile-coding representations.
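The abstract describes a supervised sparse coding objective: rather than learning a dictionary purely to reconstruct states, the sparse codes are also trained to predict the value target, coupling representation learning with policy evaluation. The paper's exact formulation is not reproduced here; the following is a toy sketch of one common way to set up such an objective, using alternating proximal-gradient steps on a combined reconstruction + prediction + l1 loss (all variable names, the `beta` trade-off, and the optimization scheme are illustrative assumptions, not the authors' algorithm):

```python
import numpy as np

def soft_threshold(Z, t):
    """Proximal operator of the l1 norm: shrink entries toward zero,
    producing exact zeros and hence sparse codes."""
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def supervised_sparse_coding(X, y, k=10, lam=0.1, beta=1.0,
                             iters=300, step=0.01, seed=0):
    """Toy sketch (not the paper's formulation) of a supervised
    sparse coding objective:

        min_{H,B,w}  ||X - H B||_F^2  +  beta * ||y - H w||^2  +  lam * ||H||_1

    X: states (n x d), y: value targets (n,),
    H: sparse codes (n x k), B: dictionary (k x d), w: value weights (k,).
    Optimized by alternating gradient steps, with a proximal (soft-
    threshold) step on H to enforce sparsity."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    H = rng.normal(scale=0.1, size=(n, k))
    B = rng.normal(scale=0.1, size=(k, d))
    w = np.zeros(k)
    for _ in range(iters):
        # Gradient of the smooth terms w.r.t. H, then l1 proximal step.
        grad_H = 2 * (H @ B - X) @ B.T + 2 * beta * np.outer(H @ w - y, w)
        H = soft_threshold(H - step * grad_H, step * lam)
        # Plain gradient steps on the dictionary and the value weights.
        B -= step * 2 * H.T @ (H @ B - X)
        w -= step * 2 * beta * H.T @ (H @ w - y)
    return H, B, w
```

Dropping the `beta` prediction term recovers the unsupervised sparse coding baseline the abstract argues against; the supervised term is what ties the learned sparse features to the policy-evaluation target.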
Cite
Text
Le et al. "Learning Sparse Representations in Reinforcement Learning with Sparse Coding." International Joint Conference on Artificial Intelligence, 2017. doi:10.24963/IJCAI.2017/287
Markdown
[Le et al. "Learning Sparse Representations in Reinforcement Learning with Sparse Coding." International Joint Conference on Artificial Intelligence, 2017.](https://mlanthology.org/ijcai/2017/le2017ijcai-learning/) doi:10.24963/IJCAI.2017/287
BibTeX
@inproceedings{le2017ijcai-learning,
title = {{Learning Sparse Representations in Reinforcement Learning with Sparse Coding}},
author = {Le, Lei and Kumaraswamy, Raksha and White, Martha},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2017},
pages = {2067--2073},
doi = {10.24963/IJCAI.2017/287},
url = {https://mlanthology.org/ijcai/2017/le2017ijcai-learning/}
}