Learning Programmatically Structured Representations with Perceptor Gradients
Abstract
We present the perceptor gradients algorithm -- a novel approach to learning symbolic representations based on the idea of decomposing an agent's policy into i) a perceptor network extracting symbols from raw observation data and ii) a task encoding program which maps the input symbols to output actions. We show that the proposed algorithm is able to learn representations that can be directly fed into a Linear-Quadratic Regulator (LQR) or a general purpose A* planner. Our experimental results confirm that the perceptor gradients algorithm is able to efficiently learn transferable symbolic representations as well as generate new observations according to a semantically meaningful specification.
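The decomposition described above can be illustrated with a short, hypothetical sketch: a perceptor network decodes a symbol from the raw observation, a fixed (non-differentiable) program maps that symbol to an action, and the perceptor is trained with a REINFORCE-style policy-gradient update, so no gradient ever needs to flow through the program. The toy task, network sizes, and all names below (perceptor, program, environment) are illustrative assumptions, not the authors' implementation.

# Minimal sketch of the perceptor-gradients idea, under the assumptions above:
# observations are noisy one-hot encodings of a hidden symbol, the "program"
# is a fixed mapping from symbol to action, and only the perceptor is updated.
import torch
import torch.nn as nn

N_SYMBOLS, OBS_DIM = 4, 8

perceptor = nn.Sequential(nn.Linear(OBS_DIM, 32), nn.ReLU(), nn.Linear(32, N_SYMBOLS))
optimiser = torch.optim.Adam(perceptor.parameters(), lr=1e-2)

def program(symbol: int) -> int:
    # Fixed task-encoding program; stands in for an LQR controller or A* planner
    # that consumes the decoded symbol. Here it simply acts on the symbol itself.
    return symbol

def environment(batch: int):
    # Toy task: the observation encodes a hidden symbol; reward is 1 if the action matches it.
    hidden = torch.randint(N_SYMBOLS, (batch,))
    obs = nn.functional.one_hot(hidden, OBS_DIM).float() + 0.1 * torch.randn(batch, OBS_DIM)
    return obs, hidden

for step in range(500):
    obs, hidden = environment(64)
    dist = torch.distributions.Categorical(logits=perceptor(obs))
    symbols = dist.sample()                                   # stochastic symbol extraction
    actions = torch.tensor([program(int(z)) for z in symbols])  # program is a black box
    returns = (actions == hidden).float()                     # reward from the environment
    loss = -(returns * dist.log_prob(symbols)).mean()         # REINFORCE update, perceptor only
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

Because the program sits outside the computation graph, the same update rule applies unchanged when the toy program above is replaced by a more capable consumer of the symbols, such as the LQR controller or A* planner mentioned in the abstract.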
Cite
Text
Penkov and Ramamoorthy. "Learning Programmatically Structured Representations with Perceptor Gradients." International Conference on Learning Representations, 2019.
Markdown
[Penkov and Ramamoorthy. "Learning Programmatically Structured Representations with Perceptor Gradients." International Conference on Learning Representations, 2019.](https://mlanthology.org/iclr/2019/penkov2019iclr-learning/)
BibTeX
@inproceedings{penkov2019iclr-learning,
  title = {{Learning Programmatically Structured Representations with Perceptor Gradients}},
  author = {Penkov, Svetlin and Ramamoorthy, Subramanian},
  booktitle = {International Conference on Learning Representations},
  year = {2019},
  url = {https://mlanthology.org/iclr/2019/penkov2019iclr-learning/}
}