Domain Adaptive Imitation Learning with Visual Observation

Abstract

In this paper, we consider domain-adaptive imitation learning with visual observation, where an agent in a target domain learns to perform a task by observing expert demonstrations in a source domain. Domain-adaptive imitation learning arises in practical scenarios where a robot, receiving visual sensory data, needs to mimic movements by visually observing other robots from different angles or robots with different shapes. To overcome the domain shift in cross-domain imitation learning with visual observation, we propose a novel framework based on dual feature extraction and image reconstruction that extracts domain-independent behavioral features from input observations, which can then be used to train the learner. Empirical results demonstrate that our approach outperforms previous algorithms for imitation learning from visual observation under domain shift.
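To make the high-level idea concrete, here is a minimal sketch of dual feature extraction with a reconstruction objective, as described in the abstract. Plain linear maps stand in for the paper's actual networks, and all dimensions, weight names, and the loss formulation are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper).
OBS_DIM, DOMAIN_DIM, BEHAVIOR_DIM = 16, 4, 4

# Linear "encoders"/"decoder" standing in for the learned networks.
W_domain = rng.standard_normal((DOMAIN_DIM, OBS_DIM)) * 0.1
W_behavior = rng.standard_normal((BEHAVIOR_DIM, OBS_DIM)) * 0.1
W_decoder = rng.standard_normal((OBS_DIM, DOMAIN_DIM + BEHAVIOR_DIM)) * 0.1

def extract_features(obs):
    """Dual feature extraction: split one observation into a
    domain-specific feature and a behavioral feature."""
    return W_domain @ obs, W_behavior @ obs

def reconstruct(domain_feat, behavior_feat):
    """Image reconstruction from both feature streams; a
    reconstruction loss ties the features back to the input."""
    return W_decoder @ np.concatenate([domain_feat, behavior_feat])

obs = rng.standard_normal(OBS_DIM)
domain_feat, behavior_feat = extract_features(obs)
recon = reconstruct(domain_feat, behavior_feat)
recon_loss = float(np.mean((recon - obs) ** 2))
```

In this sketch, the behavioral feature would be passed to the imitation learner while the domain feature absorbs appearance and viewpoint information, so that the learner's input is (ideally) domain-independent.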

Cite

Text

Choi et al. "Domain Adaptive Imitation Learning with Visual Observation." Neural Information Processing Systems, 2023.

Markdown

[Choi et al. "Domain Adaptive Imitation Learning with Visual Observation." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/choi2023neurips-domain/)

BibTeX

@inproceedings{choi2023neurips-domain,
  title     = {{Domain Adaptive Imitation Learning with Visual Observation}},
  author    = {Choi, Sungho and Han, Seungyul and Kim, Woojun and Chae, Jongseong and Jung, Whiyoung and Sung, Youngchul},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/choi2023neurips-domain/}
}