Hierarchical Imitation Learning with Vector Quantized Models

Abstract

The ability to plan actions on multiple levels of abstraction enables intelligent agents to solve complex tasks effectively. However, learning the models for both low- and high-level planning from demonstrations has proven challenging, especially with high-dimensional inputs. To address this issue, we propose to use reinforcement learning to identify subgoals in expert trajectories by associating the magnitude of the rewards with the predictability of low-level actions given the state and the chosen subgoal. We build a vector-quantized generative model for the identified subgoals to perform subgoal-level planning. In experiments, the algorithm excels at solving complex, long-horizon decision-making problems, outperforming the state of the art. Because of its ability to plan, our algorithm can find better trajectories than the ones in the training set.
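The vector-quantized model discretizes continuous subgoal embeddings by snapping each one to its nearest entry in a learned codebook. A minimal sketch of that nearest-neighbor quantization step (codebook size, embedding dimension, and function names are illustrative, not taken from the paper):

```python
import numpy as np

def vector_quantize(z, codebook):
    """Map each latent vector in z to its nearest codebook entry (L2 distance).

    z: (batch, dim) continuous subgoal embeddings
    codebook: (num_codes, dim) learned discrete codes
    Returns the quantized vectors and their codebook indices.
    """
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (batch, num_codes)
    indices = dists.argmin(axis=1)
    return codebook[indices], indices

# Toy example: 8 discrete subgoal codes in a 4-dimensional latent space.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))
z = rng.normal(size=(3, 4))
quantized, idx = vector_quantize(z, codebook)
```

Planning then happens over the discrete code indices rather than raw continuous embeddings, which keeps the subgoal-level search space finite.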

Cite

Text

Kujanpää et al. "Hierarchical Imitation Learning with Vector Quantized Models." International Conference on Machine Learning, 2023.

Markdown

[Kujanpää et al. "Hierarchical Imitation Learning with Vector Quantized Models." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/kujanpaa2023icml-hierarchical/)

BibTeX

@inproceedings{kujanpaa2023icml-hierarchical,
  title     = {{Hierarchical Imitation Learning with Vector Quantized Models}},
  author    = {Kujanpää, Kalle and Pajarinen, Joni and Ilin, Alexander},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {17896--17919},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/kujanpaa2023icml-hierarchical/}
}