Keyframe-Focused Visual Imitation Learning
Abstract
Imitation learning trains control policies by mimicking pre-recorded expert demonstrations. In partially observable settings, imitation policies must rely on observation histories, yet many seemingly paradoxical results show better performance for policies that only access the most recent observation. Recent solutions ranging from causal graph learning to deep information bottlenecks have shown promising results, but fail to scale to realistic settings such as visual imitation. We propose a solution that outperforms these prior approaches by upweighting demonstration keyframes corresponding to expert action changepoints. This simple approach scales easily to complex visual imitation settings. Our experimental results demonstrate consistent performance improvements over all baselines on image-based Gym MuJoCo continuous control tasks. Finally, on the CARLA photorealistic vision-based urban driving simulator, we resolve a long-standing issue in behavioral cloning for driving by demonstrating effective imitation from observation histories. Supplementary materials and code at: \url{https://tinyurl.com/imitation-keyframes}.
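The abstract describes upweighting demonstration keyframes at expert action changepoints. As a minimal illustrative sketch (not the paper's exact criterion), one could mark a frame as a keyframe when the expert action differs sharply from the previous action, then give such frames a larger weight in the behavioral-cloning loss; the `threshold` and `boost` parameters below are hypothetical:

```python
import numpy as np

def keyframe_weights(actions, threshold=0.1, boost=10.0):
    """Per-frame loss weights for behavioral cloning.

    actions: (T, action_dim) expert action sequence.
    A frame is treated as a keyframe (assumed changepoint criterion)
    when the action jumps from the previous step by more than `threshold`.
    """
    diffs = np.linalg.norm(np.diff(actions, axis=0), axis=1)  # (T-1,)
    weights = np.ones(len(actions))
    weights[1:][diffs > threshold] = boost  # upweight changepoint frames
    # Normalize so the mean weight is 1, keeping the loss scale comparable
    return weights / weights.sum() * len(weights)
```

These weights would multiply the per-sample imitation loss, so rare action-change frames are not drowned out by long stretches of repeated actions.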
Cite
Text
Wen et al. "Keyframe-Focused Visual Imitation Learning." International Conference on Machine Learning, 2021.
Markdown
[Wen et al. "Keyframe-Focused Visual Imitation Learning." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/wen2021icml-keyframefocused/)
BibTeX
@inproceedings{wen2021icml-keyframefocused,
title = {{Keyframe-Focused Visual Imitation Learning}},
author = {Wen, Chuan and Lin, Jierui and Qian, Jianing and Gao, Yang and Jayaraman, Dinesh},
booktitle = {International Conference on Machine Learning},
year = {2021},
pages = {11123--11133},
volume = {139},
url = {https://mlanthology.org/icml/2021/wen2021icml-keyframefocused/}
}