F-GAIL: Learning F-Divergence for Generative Adversarial Imitation Learning

Abstract

Imitation learning (IL) aims to learn a policy from expert demonstrations that minimizes the discrepancy between the learner and expert behaviors. Various imitation learning algorithms have been proposed with different pre-determined divergences to quantify the discrepancy. This naturally gives rise to the following question: given a set of expert demonstrations, which divergence can recover the expert policy more accurately and with higher data efficiency? In this work, we propose f-GAIL, a new generative adversarial imitation learning model that automatically learns a discrepancy measure from the f-divergence family as well as a policy capable of producing expert-like behaviors. Compared with IL baselines using various predefined divergence measures, f-GAIL learns better policies with higher data efficiency on six physics-based control tasks.
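To make the adversarial objective behind the abstract concrete, the following is a minimal sketch of the variational lower bound D_f(rho_E || rho_pi) >= E_rho_E[T(s, a)] - E_rho_pi[f*(T(s, a))] that GAIL-style discriminators maximize, where f* is the convex conjugate of f. This is not the authors' implementation: the PyTorch framing, network sizes, and the fixed KL conjugate are illustrative assumptions, and f-GAIL's central contribution, learning f itself from the f-divergence family, is deliberately omitted here.

# Sketch of an f-divergence variational lower bound for a GAIL-style
# discriminator (illustrative only; f-GAIL instead learns the divergence f).
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Maps state-action pairs to the variational statistic T(s, a)."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)


def f_divergence_lower_bound(disc, expert_obs, expert_act, policy_obs, policy_act,
                             conjugate=lambda t: torch.exp(t - 1.0)):
    """Variational lower bound on D_f between expert and policy occupancy
    measures. The default conjugate f*(t) = exp(t - 1) corresponds to the
    (forward) KL divergence; a learned f would replace this fixed choice."""
    t_expert = disc(expert_obs, expert_act)   # E_{rho_E}[T(s, a)]
    t_policy = disc(policy_obs, policy_act)   # E_{rho_pi}[f*(T(s, a))]
    return t_expert.mean() - conjugate(t_policy).mean()

In a typical training loop, the discriminator ascends this bound on batches of expert and policy transitions, while the policy is updated with an RL algorithm using a reward derived from T(s, a) (for example, -f*(T(s, a))), so that improving the policy tightens the estimated divergence toward zero.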

Cite

Text

Zhang et al. "F-GAIL: Learning F-Divergence for Generative Adversarial Imitation Learning." Neural Information Processing Systems, 2020.

Markdown

[Zhang et al. "F-GAIL: Learning F-Divergence for Generative Adversarial Imitation Learning." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/zhang2020neurips-fgail/)

BibTeX

@inproceedings{zhang2020neurips-fgail,
  title     = {{F-GAIL: Learning F-Divergence for Generative Adversarial Imitation Learning}},
  author    = {Zhang, Xin and Li, Yanhua and Zhang, Ziming and Zhang, Zhi-Li},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/zhang2020neurips-fgail/}
}