Refactoring Policy for Compositional Generalizability Using Self-Supervised Object Proposals
Abstract
We study how to learn a policy with compositional generalizability. We propose a two-stage framework, which refactorizes a high-reward teacher policy into a generalizable student policy with strong inductive bias. In particular, we implement an object-centric GNN-based student policy, whose input objects are learned from images through self-supervised learning. Empirically, we evaluate our approach on four difficult tasks that require compositional generalizability, and achieve superior performance compared to baselines.
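To make the second stage concrete, below is a minimal, hypothetical sketch (not the authors' released code) of the refactoring step: a small GNN student policy operating on object features is fit by behavior cloning to actions produced by a teacher. The object features and teacher labels here are random placeholders standing in for self-supervised object proposals and the trained high-reward teacher, and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GNNStudentPolicy(nn.Module):
    """One round of message passing over a fully connected object graph,
    followed by mean pooling and an action head."""
    def __init__(self, obj_dim=16, hidden=64, num_actions=4):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(obj_dim, hidden), nn.ReLU())
        self.message = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, num_actions)

    def forward(self, objects):             # objects: (batch, num_objects, obj_dim)
        h = self.encode(objects)            # (B, N, H)
        B, N, H = h.shape
        # Pairwise messages over a fully connected object graph.
        src = h.unsqueeze(2).expand(B, N, N, H)
        dst = h.unsqueeze(1).expand(B, N, N, H)
        msgs = self.message(torch.cat([src, dst], dim=-1)).mean(dim=2)
        node = h + msgs                     # residual node update
        return self.head(node.mean(dim=1))  # pooled readout -> action logits

# Distillation loop: fit the student to actions from a (placeholder) teacher.
student = GNNStudentPolicy()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    objects = torch.randn(32, 5, 16)                  # stand-in object proposals
    with torch.no_grad():
        teacher_actions = torch.randint(0, 4, (32,))  # stand-in teacher labels
    loss = loss_fn(student(objects), teacher_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because the student shares parameters across objects and pools over the graph, it can, in principle, be applied to scenes with a different number of objects than seen during training, which is the compositional-generalization property the paper targets.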
Cite
Text
Mu et al. "Refactoring Policy for Compositional Generalizability Using Self-Supervised Object Proposals." Neural Information Processing Systems, 2020.

Markdown

[Mu et al. "Refactoring Policy for Compositional Generalizability Using Self-Supervised Object Proposals." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/mu2020neurips-refactoring/)

BibTeX
@inproceedings{mu2020neurips-refactoring,
title = {{Refactoring Policy for Compositional Generalizability Using Self-Supervised Object Proposals}},
author = {Mu, Tongzhou and Gu, Jiayuan and Jia, Zhiwei and Tang, Hao and Su, Hao},
booktitle = {Neural Information Processing Systems},
year = {2020},
url = {https://mlanthology.org/neurips/2020/mu2020neurips-refactoring/}
}