Adversarial Option-Aware Hierarchical Imitation Learning

Abstract

Learning skills for an agent from long-horizon, unannotated demonstrations has been a long-standing challenge. Existing approaches such as Hierarchical Imitation Learning (HIL) are prone to compounding errors or suboptimal solutions. In this paper, we propose Option-GAIL, a novel method for learning skills over long horizons. The key idea of Option-GAIL is to model the task hierarchy with options and to train the policy via generative adversarial optimization. In particular, we propose an Expectation-Maximization (EM)-style algorithm: an E-step that samples the options of the expert conditioned on the currently learned policy, and an M-step that updates the low- and high-level policies of the agent simultaneously to minimize a newly proposed option-occupancy measure between the expert and the agent. We theoretically prove the convergence of the proposed algorithm. Experiments show that Option-GAIL consistently outperforms its counterparts across a variety of tasks.
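The EM-style alternation described in the abstract can be sketched as follows. This is a minimal illustrative skeleton, not the paper's method: it uses a toy tabular setting, a greedy stand-in for the E-step's option inference, and plain maximum-likelihood fitting in place of the adversarial occupancy matching that Option-GAIL actually performs. All sizes, names, and the synthetic trajectory are invented for illustration.

```python
import numpy as np

# Toy setting: 4 states, 2 options, 2 actions (all sizes are illustrative).
N_STATES, N_OPTIONS, N_ACTIONS = 4, 2, 2

# Synthetic expert demonstration: state-action pairs; options are unobserved.
expert_traj = [(t % N_STATES, t % N_ACTIONS) for t in range(12)]

def e_step(traj, high_pi, low_pi):
    """E-step: infer an option label for each step of the expert trajectory
    under the current hierarchical policy (a greedy stand-in for the
    Viterbi-style inference used in the paper)."""
    options, o_prev = [], 0
    for s, a in traj:
        # Score each option o by P(o | s, o_prev) * P(a | s, o).
        scores = high_pi[o_prev, s] * low_pi[:, s, a]
        o_prev = int(np.argmax(scores))
        options.append(o_prev)
    return options

def m_step(traj, options):
    """M-step: re-fit high- and low-level policies to the option-augmented
    data (maximum likelihood here; the paper instead minimizes an
    option-occupancy discrepancy adversarially)."""
    high = np.ones((N_OPTIONS, N_STATES, N_OPTIONS))  # add-one smoothing
    low = np.ones((N_OPTIONS, N_STATES, N_ACTIONS))
    o_prev = 0
    for (s, a), o in zip(traj, options):
        high[o_prev, s, o] += 1
        low[o, s, a] += 1
        o_prev = o
    return (high / high.sum(-1, keepdims=True),
            low / low.sum(-1, keepdims=True))

# Initialize uniform policies, then alternate E- and M-steps.
high_pi = np.full((N_OPTIONS, N_STATES, N_OPTIONS), 1.0 / N_OPTIONS)
low_pi = np.full((N_OPTIONS, N_STATES, N_ACTIONS), 1.0 / N_ACTIONS)
for _ in range(5):
    opts = e_step(expert_traj, high_pi, low_pi)
    high_pi, low_pi = m_step(expert_traj, opts)
```

The point of the sketch is the control flow: option labels for the expert data are re-inferred under the current policy, then both policy levels are updated jointly against the relabeled data.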

Cite

Text

Jing et al. "Adversarial Option-Aware Hierarchical Imitation Learning." International Conference on Machine Learning, 2021.

Markdown

[Jing et al. "Adversarial Option-Aware Hierarchical Imitation Learning." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/jing2021icml-adversarial/)

BibTeX

@inproceedings{jing2021icml-adversarial,
  title     = {{Adversarial Option-Aware Hierarchical Imitation Learning}},
  author    = {Jing, Mingxuan and Huang, Wenbing and Sun, Fuchun and Ma, Xiaojian and Kong, Tao and Gan, Chuang and Li, Lei},
  booktitle = {International Conference on Machine Learning},
  year      = {2021},
  pages     = {5097--5106},
  volume    = {139},
  url       = {https://mlanthology.org/icml/2021/jing2021icml-adversarial/}
}