Chain-of-Thought Predictive Control

Abstract

We study generalizable policy learning from demonstrations for complex low-level control tasks (e.g., contact-rich object manipulation). We propose an imitation learning method that incorporates the ideas of temporal abstraction and planning from Hierarchical RL (HRL) in a novel and effective manner. As a step towards decision foundation models, our design can utilize scalable, albeit highly sub-optimal, demonstrations. Specifically, we find that certain short subsequences of the demos, which we call the chain-of-thought (CoT), reflect their hierarchical structure by marking the completion of subgoals in the tasks. Our model learns to dynamically predict the entire CoT as coherent, structured long-term action guidance and consistently outperforms typical two-stage subgoal-conditioned policies. Moreover, the CoT facilitates generalizable policy learning, as it exemplifies the decision patterns shared among demos (even those with heavy noise and randomness). Our method, Chain-of-Thought Predictive Control (CoTPC), significantly outperforms existing methods on challenging low-level manipulation tasks learned from scalable yet highly sub-optimal demos.
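For illustration, below is a minimal PyTorch sketch of the core idea the abstract describes: a single model that jointly predicts the entire CoT (key states marking subgoal completion) and the low-level action, rather than a two-stage subgoal-conditioned pipeline. All module names, layer sizes, and the MLP encoder are hypothetical simplifications for exposition; the paper's actual architecture may differ (e.g., a sequence model over state-action histories).

import torch
import torch.nn as nn

class CoTPCSketch(nn.Module):
    """Illustrative sketch of joint CoT-and-action prediction.

    Hypothetical architecture: an MLP encoder, a head that predicts
    the full chain-of-thought, and an action head conditioned on both
    the current state encoding and the predicted CoT.
    """

    def __init__(self, state_dim, action_dim, num_cot_steps, hidden_dim=256):
        super().__init__()
        self.num_cot_steps = num_cot_steps
        self.state_dim = state_dim
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Predicts the entire CoT at once: the key states marking subgoal
        # completion, serving as coherent long-term action guidance.
        self.cot_head = nn.Linear(hidden_dim, num_cot_steps * state_dim)
        # Single-stage action head conditioned on the predicted CoT,
        # unlike typical two-stage subgoal-conditioned policies.
        self.action_head = nn.Sequential(
            nn.Linear(hidden_dim + num_cot_steps * state_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, action_dim),
        )

    def forward(self, state):
        h = self.encoder(state)
        cot_flat = self.cot_head(h)
        action = self.action_head(torch.cat([h, cot_flat], dim=-1))
        cot = cot_flat.view(-1, self.num_cot_steps, self.state_dim)
        return action, cot

# Training would combine a behavior-cloning loss with a CoT-prediction
# loss against key states extracted from the demos; the weighting below
# is illustrative, not taken from the paper:
#   loss = mse(action, demo_action) + lambda_cot * mse(cot, demo_key_states)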

Cite

Text

Jia et al. "Chain-of-Thought Predictive Control." ICLR 2023 Workshops: RRL, 2023.

Markdown

[Jia et al. "Chain-of-Thought Predictive Control." ICLR 2023 Workshops: RRL, 2023.](https://mlanthology.org/iclrw/2023/jia2023iclrw-chainofthought/)

BibTeX

@inproceedings{jia2023iclrw-chainofthought,
  title     = {{Chain-of-Thought Predictive Control}},
  author    = {Jia, Zhiwei and Liu, Fangchen and Thumuluri, Vineet and Chen, Linghao and Huang, Zhiao and Su, Hao},
  booktitle = {ICLR 2023 Workshops: RRL},
  year      = {2023},
  url       = {https://mlanthology.org/iclrw/2023/jia2023iclrw-chainofthought/}
}