Smooth Imitation Learning for Online Sequence Prediction

Abstract

We study the problem of smooth imitation learning for online sequence prediction, where the goal is to train a policy that can smoothly imitate demonstrated behavior in a dynamic and continuous environment in response to online, sequential context input. Since the mapping from context to behavior is often complex, we take a learning reduction approach: we reduce smooth imitation learning to a regression problem over expressive function classes that are regularized to ensure smoothness. We present a learning meta-algorithm that achieves fast and stable convergence to a good policy. Our approach enjoys several attractive properties: it is fully deterministic, it employs an adaptive learning rate that can provably yield larger policy improvements than previous approaches, and it ensures stable convergence. Our empirical results demonstrate significant performance gains over previous approaches.
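The learning-reduction idea in the abstract can be illustrated with a minimal sketch: iteratively roll out the current policy, fit a regressor to the demonstrated behavior, and blend the new regressor into the policy with an adaptive rate. All names below are hypothetical, and the ridge regressor and error-ratio blending rule are simple stand-ins assumed for illustration, not the paper's actual algorithm.

```python
import numpy as np

def rollout(policy, contexts):
    """Roll the policy forward: each prediction conditions on the
    previous output, so the policy's smoothness matters online."""
    y_prev, outputs = 0.0, []
    for x in contexts:
        y_prev = policy(x, y_prev)
        outputs.append(y_prev)
    return np.array(outputs)

def train_regressor(contexts, prev_outputs, targets, lam=1.0):
    """Ridge regression on features (x, y_prev, 1); the ridge penalty
    is a crude stand-in for the smoothness-regularized function class."""
    X = np.column_stack([contexts, prev_outputs, np.ones_like(contexts)])
    w = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ targets)
    return lambda x, y_prev: np.array([x, y_prev, 1.0]) @ w

def smooth_imitation(contexts, demo, n_iters=10):
    policy = lambda x, y_prev: y_prev  # trivial initial policy
    for _ in range(n_iters):
        y_roll = rollout(policy, contexts)
        new = train_regressor(contexts, np.r_[0.0, y_roll[:-1]], demo)
        # Adaptive blending rate: lean harder on the new regressor
        # when it achieves lower rollout error (an assumed rule,
        # echoing the paper's adaptive learning rate in spirit).
        err_old = np.mean((y_roll - demo) ** 2)
        err_new = np.mean((rollout(new, contexts) - demo) ** 2)
        beta = np.clip(err_old / (err_old + err_new + 1e-12), 0.1, 0.9)
        old = policy
        policy = lambda x, y_prev, a=new, b=old, bt=beta: (
            bt * a(x, y_prev) + (1 - bt) * b(x, y_prev))
    return policy
```

Blending the new regressor with the previous policy, rather than replacing it outright, is what keeps successive policies close to one another and the online predictions smooth.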

Cite

Text

Le et al. "Smooth Imitation Learning for Online Sequence Prediction." International Conference on Machine Learning, 2016.

Markdown

[Le et al. "Smooth Imitation Learning for Online Sequence Prediction." International Conference on Machine Learning, 2016.](https://mlanthology.org/icml/2016/le2016icml-smooth/)

BibTeX

@inproceedings{le2016icml-smooth,
  title     = {{Smooth Imitation Learning for Online Sequence Prediction}},
  author    = {Le, Hoang and Kang, Andrew and Yue, Yisong and Carr, Peter},
  booktitle = {International Conference on Machine Learning},
  year      = {2016},
  pages     = {680--688},
  volume    = {48},
  url       = {https://mlanthology.org/icml/2016/le2016icml-smooth/}
}