Policy Decorator: Model-Agnostic Online Refinement for Large Policy Model

Abstract

Recent advancements in robot learning have used imitation learning with large models and extensive demonstrations to develop effective policies. However, these models are often limited by the quantity, quality, and diversity of demonstrations. This paper explores improving offline-trained imitation learning models through online interactions with the environment. We introduce Policy Decorator, which uses a model-agnostic residual policy to refine large imitation learning models during online interactions. By implementing controlled exploration strategies, Policy Decorator enables stable, sample-efficient online learning. Our evaluation spans eight tasks across two benchmarks—ManiSkill and Adroit—and involves two state-of-the-art imitation learning models (Behavior Transformer and Diffusion Policy). The results show Policy Decorator effectively improves the offline-trained policies and preserves the smooth motion of imitation learning models, avoiding the erratic behaviors of pure RL policies. See our [project page](https://policydecorator.github.io/) for videos.
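
The abstract describes a residual policy layered on top of a frozen, offline-trained base model. The sketch below illustrates one plausible reading of that idea; the class, parameter names (e.g., `alpha`), and the clipping rule are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

class PolicyDecoratorSketch:
    """Minimal sketch of the residual-refinement idea from the abstract.

    A frozen base policy (e.g., a Behavior Transformer or Diffusion Policy)
    proposes an action; a small residual policy, trained online with RL,
    adds a bounded correction. The bound `alpha` (hypothetical name) keeps
    the combined action close to the base behavior, i.e., controlled exploration.
    """

    def __init__(self, base_policy, residual_policy, alpha=0.1):
        self.base_policy = base_policy          # frozen, offline-trained model
        self.residual_policy = residual_policy  # small model refined online
        self.alpha = alpha                      # residual bound: limits exploration

    def act(self, obs):
        base_action = self.base_policy(obs)               # action from the large model
        residual = self.residual_policy(obs, base_action) # learned correction
        # Clip the residual so the final action stays near the base action.
        residual = np.clip(residual, -self.alpha, self.alpha)
        return base_action + residual
```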

Cite

Text

Yuan et al. "Policy Decorator: Model-Agnostic Online Refinement for Large Policy Model." International Conference on Learning Representations, 2025.

Markdown

[Yuan et al. "Policy Decorator: Model-Agnostic Online Refinement for Large Policy Model." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/yuan2025iclr-policy/)

BibTeX

@inproceedings{yuan2025iclr-policy,
  title     = {{Policy Decorator: Model-Agnostic Online Refinement for Large Policy Model}},
  author    = {Yuan, Xiu and Mu, Tongzhou and Tao, Stone and Fang, Yunhao and Zhang, Mengke and Su, Hao},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/yuan2025iclr-policy/}
}