Fighting Copycat Agents in Behavioral Cloning from Observation Histories

Abstract

Imitation learning trains policies to map from input observations to the actions that an expert would choose. In this setting, distribution shift frequently exacerbates the effect of misattributing expert actions to nuisance correlates among the observed variables. We observe that a common instance of this causal confusion occurs in partially observed settings when expert actions are strongly correlated over time: the imitator learns to cheat by predicting the expert's previous action, rather than the next action. To combat this "copycat problem", we propose an adversarial approach to learn a feature representation that removes excess information about the previous expert action nuisance correlate, while retaining the information necessary to predict the next action. In our experiments, our approach improves performance significantly across a variety of partially observed imitation learning tasks.
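The abstract describes the adversarial objective only at a high level. Below is a minimal, hypothetical PyTorch sketch of one way such an objective could be set up: an encoder is trained so that a policy head can predict the next expert action from the learned features, while an adversary head that tries to recover the previous action is fooled. All module names (`encoder`, `policy_head`, `adversary_head`, `training_step`), dimensions, and the use of a gradient-reversal layer are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of adversarial feature removal in the spirit of the
# abstract: the encoder keeps information needed to predict the next action
# a_t while scrubbing information about the previous action a_{t-1}.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) gradients on backward."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)


obs_dim, act_dim, feat_dim, history_len = 16, 4, 64, 3

# Encoder over a stacked observation history (architecture is an assumption).
encoder = nn.Sequential(
    nn.Linear(obs_dim * history_len, 128), nn.ReLU(),
    nn.Linear(128, feat_dim),
)
policy_head = nn.Linear(feat_dim, act_dim)     # predicts next action a_t
adversary_head = nn.Linear(feat_dim, act_dim)  # tries to recover a_{t-1}

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(policy_head.parameters())
    + list(adversary_head.parameters()), lr=3e-4)
mse = nn.MSELoss()


def training_step(obs_history, next_action, prev_action, lam=1.0):
    z = encoder(obs_history.flatten(1))
    bc_loss = mse(policy_head(z), next_action)  # behavioral cloning loss
    # The adversary minimizes this loss; through the reversed gradient, the
    # encoder instead maximizes it, removing a_{t-1} information from z.
    adv_loss = mse(adversary_head(grad_reverse(z, lam)), prev_action)
    opt.zero_grad()
    (bc_loss + adv_loss).backward()
    opt.step()
    return bc_loss.item(), adv_loss.item()


# Toy usage with random tensors standing in for expert demonstrations.
obs = torch.randn(32, history_len, obs_dim)
a_t, a_prev = torch.randn(32, act_dim), torch.randn(32, act_dim)
print(training_step(obs, a_t, a_prev))
```

Gradient reversal is just one convenient way to realize the min-max game in a single backward pass; an implementation could equally alternate updates between the adversary and the encoder, and the paper itself should be consulted for the actual formulation.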

Cite

Text

Wen et al. "Fighting Copycat Agents in Behavioral Cloning from Observation Histories." Neural Information Processing Systems, 2020.

Markdown

[Wen et al. "Fighting Copycat Agents in Behavioral Cloning from Observation Histories." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/wen2020neurips-fighting/)

BibTeX

@inproceedings{wen2020neurips-fighting,
  title     = {{Fighting Copycat Agents in Behavioral Cloning from Observation Histories}},
  author    = {Wen, Chuan and Lin, Jierui and Darrell, Trevor and Jayaraman, Dinesh and Gao, Yang},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/wen2020neurips-fighting/}
}