A Coupled Flow Approach to Imitation Learning

Abstract

In reinforcement learning and imitation learning, an object of central importance is the state distribution induced by the policy. It plays a crucial role in the policy gradient theorem, and references to it, along with the related state-action distribution, can be found throughout the literature. Despite its importance, the state distribution is mostly discussed indirectly and theoretically rather than being modeled explicitly, largely for lack of appropriate density-estimation tools. In this work, we investigate applications of a normalizing-flow-based model for the aforementioned distributions. In particular, we use a pair of flows coupled through the optimality point of the Donsker-Varadhan representation of the Kullback-Leibler (KL) divergence for imitation learning via distribution matching. Our algorithm, Coupled Flow Imitation Learning (CFIL), achieves state-of-the-art performance on benchmark tasks with a single expert trajectory and extends naturally to a variety of other settings, including the subsampled and state-only regimes.
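
For reference, the Donsker-Varadhan representation invoked in the abstract is the standard variational form of the KL divergence (the notation below is ours, not necessarily the paper's):

$$
D_{\mathrm{KL}}(P \,\|\, Q) \;=\; \sup_{T} \; \mathbb{E}_{x \sim P}\!\left[T(x)\right] \;-\; \log \mathbb{E}_{x \sim Q}\!\left[e^{T(x)}\right],
$$

with the supremum attained at $T^{*}(x) = \log \frac{dP}{dQ}(x)$ up to an additive constant. Estimating each density with its own normalizing flow and substituting the resulting log-ratio for $T$ is one natural reading of the coupling the abstract describes.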

Cite

Text

Freund et al. "A Coupled Flow Approach to Imitation Learning." International Conference on Machine Learning, 2023.

Markdown

[Freund et al. "A Coupled Flow Approach to Imitation Learning." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/freund2023icml-coupled/)

BibTeX

@inproceedings{freund2023icml-coupled,
  title     = {{A Coupled Flow Approach to Imitation Learning}},
  author    = {Freund, Gideon Joseph and Sarafian, Elad and Kraus, Sarit},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {10357--10372},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/freund2023icml-coupled/}
}