Rethinking Inverse Reinforcement Learning: From Data Alignment to Task Alignment

Abstract

Many imitation learning (IL) algorithms use inverse reinforcement learning (IRL) to infer a reward function that aligns with the demonstrations. However, the inferred reward functions often fail to capture the underlying task objectives. In this paper, we propose a novel framework for IRL-based IL that prioritizes task alignment over conventional data alignment. Our framework is a semi-supervised approach that leverages expert demonstrations as weak supervision to derive a set of candidate reward functions that align with the task rather than only with the data. It then adopts an adversarial mechanism to train a policy with this set of reward functions to gain a collective validation of the policy's ability to accomplish the task. We provide theoretical insights into this framework's ability to mitigate task-reward misalignment and present a practical implementation. Our experimental results show that our framework outperforms conventional IL baselines in complex and transfer learning scenarios.
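The adversarial mechanism described in the abstract can be read as a max-min loop: the policy is evaluated against every candidate reward function and improved with respect to the one it currently satisfies least, so that good performance requires a "collective validation" by the whole candidate set. The toy sketch below illustrates this reading only; the bandit-style setting, the fixed reward vectors standing in for inferred reward functions, and all names are hypothetical and are not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_actions = 4
# Hypothetical stand-ins for candidate reward functions derived from
# demonstrations: here, just fixed reward vectors over actions.
candidate_rewards = [rng.normal(size=n_actions) for _ in range(3)]

# Softmax policy over actions, parameterized by logits.
logits = np.zeros(n_actions)

def policy_probs(logits):
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

for step in range(500):
    probs = policy_probs(logits)
    # Expected return of the current policy under each candidate reward.
    returns = [float(probs @ r) for r in candidate_rewards]
    # Adversary: select the candidate reward the policy satisfies least.
    worst = candidate_rewards[int(np.argmin(returns))]
    # Policy-gradient ascent on the worst-case reward
    # (exact gradient of probs @ worst w.r.t. the softmax logits).
    grad = probs * (worst - probs @ worst)
    logits += 0.1 * grad

print("final action probabilities:", np.round(policy_probs(logits), 3))
print("worst-case expected return:",
      min(float(policy_probs(logits) @ r) for r in candidate_rewards))
```

Maximizing the minimum return over the candidate set is what distinguishes this loop from fitting a single inferred reward: a policy that exploits idiosyncrasies of one candidate is penalized by the others.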

Cite

Text

Zhou and Li. "Rethinking Inverse Reinforcement Learning: From Data Alignment to Task Alignment." Neural Information Processing Systems, 2024. doi:10.52202/079017-0869

Markdown

[Zhou and Li. "Rethinking Inverse Reinforcement Learning: From Data Alignment to Task Alignment." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/zhou2024neurips-rethinking/) doi:10.52202/079017-0869

BibTeX

@inproceedings{zhou2024neurips-rethinking,
  title     = {{Rethinking Inverse Reinforcement Learning: From Data Alignment to Task Alignment}},
  author    = {Zhou, Weichao and Li, Wenchao},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-0869},
  url       = {https://mlanthology.org/neurips/2024/zhou2024neurips-rethinking/}
}