Discovering Generalizable Spatial Goal Representations via Graph-Based Active Reward Learning
Abstract
In this work, we consider one-shot imitation learning for object rearrangement tasks, where an AI agent needs to watch a single expert demonstration and learn to perform the same task in different environments. To achieve strong generalization, the AI agent must infer the spatial goal specification for the task. However, there can be multiple goal specifications that fit the given demonstration. To address this, we propose a reward learning approach, Graph-based Equivalence Mappings (GEM), that can discover spatial goal representations aligned with the intended goal specification, enabling successful generalization in unseen environments. We conducted experiments with both simulated oracles and human subjects. The results show that GEM can drastically improve the generalizability of the learned goal representations over strong baselines.
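To make the ambiguity described in the abstract concrete, here is a minimal, hypothetical sketch (not the paper's actual GEM implementation): a spatial goal is treated as a graph whose nodes are objects and whose edges carry pairwise spatial relations, and a reward scores how many goal edges a scene satisfies. The names `Obj`, `relation`, `goal_graph`, and `reward`, and the toy predicates, are illustrative assumptions.

```python
from itertools import combinations
from dataclasses import dataclass

# Illustrative sketch only: a spatial goal as a graph of pairwise relations.
# GEM itself learns such representations via graph-based active reward
# learning; this toy code just shows why one demo admits many goal graphs.

@dataclass(frozen=True)
class Obj:
    name: str
    x: float
    y: float

def relation(a: Obj, b: Obj, tol: float = 0.1) -> str:
    """Crude pairwise spatial predicate (assumed for illustration)."""
    if abs(a.x - b.x) < tol:
        return "aligned_x"
    if abs(a.y - b.y) < tol:
        return "aligned_y"
    return "apart"

def goal_graph(scene: list[Obj]) -> dict[tuple[str, str], str]:
    """Extract a goal graph (object pair -> relation label) from a scene."""
    return {(a.name, b.name): relation(a, b) for a, b in combinations(scene, 2)}

def reward(scene: list[Obj], goal: dict[tuple[str, str], str]) -> float:
    """Fraction of goal edges satisfied in the current scene."""
    current = goal_graph(scene)
    satisfied = sum(current.get(edge) == rel for edge, rel in goal.items())
    return satisfied / max(len(goal), 1)

# A single demonstration induces one full goal graph, but many sub-graphs of
# it (e.g., only the "aligned_x" edges) also explain the demo equally well --
# this is the ambiguity that active queries are meant to resolve.
demo = [Obj("mug", 0.0, 0.0), Obj("plate", 0.0, 0.5), Obj("fork", 0.5, 0.5)]
goal = goal_graph(demo)
print(reward(demo, goal))  # 1.0 on the demonstration itself
```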
Cite
Text
Netanyahu et al. "Discovering Generalizable Spatial Goal Representations via Graph-Based Active Reward Learning." ICLR 2022 Workshops: OSC, 2022.
Markdown
[Netanyahu et al. "Discovering Generalizable Spatial Goal Representations via Graph-Based Active Reward Learning." ICLR 2022 Workshops: OSC, 2022.](https://mlanthology.org/iclrw/2022/netanyahu2022iclrw-discovering/)
BibTeX
@inproceedings{netanyahu2022iclrw-discovering,
title = {{Discovering Generalizable Spatial Goal Representations via Graph-Based Active Reward Learning}},
author = {Netanyahu, Aviv and Shu, Tianmin and Tenenbaum, Joshua B. and Agrawal, Pulkit},
booktitle = {ICLR 2022 Workshops: OSC},
year = {2022},
url = {https://mlanthology.org/iclrw/2022/netanyahu2022iclrw-discovering/}
}