On Feasible Rewards in Multi-Agent Inverse Reinforcement Learning
Abstract
Multi-agent inverse reinforcement learning (MAIRL) aims to recover agent reward functions from expert demonstrations. We characterize the feasible reward set in Markov games, identifying all reward functions that rationalize a given equilibrium. However, equilibrium-based observations are often ambiguous: a single Nash equilibrium can correspond to many reward structures, potentially changing the game's nature in multi-agent systems. We address this by introducing entropy-regularized Markov games, which yield a unique equilibrium while preserving strategic incentives. For this setting, we provide a sample complexity analysis detailing how errors affect learned policy performance. Our work establishes theoretical foundations and practical insights for MAIRL.
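To make the reward ambiguity described above concrete, here is a minimal sketch (not from the paper) in NumPy: the same mixed Nash equilibrium of a small bimatrix game is rationalized by several different reward matrices, so observing the equilibrium alone cannot identify the rewards. The helper `is_nash` and the example payoffs are illustrative assumptions, not the authors' construction.

```python
import numpy as np

def is_nash(A, B, x, y, tol=1e-9):
    """Check whether mixed strategies (x, y) form a Nash equilibrium of the
    bimatrix game with payoffs A (row player) and B (column player)."""
    row_payoffs = A @ y          # row player's payoff for each pure action vs. y
    col_payoffs = x @ B          # column player's payoff for each pure action vs. x
    # (x, y) is an equilibrium iff each mix attains the best pure-action payoff.
    return (x @ row_payoffs >= row_payoffs.max() - tol and
            col_payoffs @ y >= col_payoffs.max() - tol)

# Matching-pennies-style payoffs; the uniform mix is an equilibrium.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
B = -A
x = y = np.array([0.5, 0.5])

# The same equilibrium is consistent with rescaled and shifted rewards,
# illustrating why the feasible reward set is a set rather than a point.
for A2, B2 in [(A, B), (3.0 * A, 3.0 * B), (A + 2.0, B - 1.0)]:
    print(is_nash(A2, B2, x, y))   # True, True, True
```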
Cite
Text
Freihaut and Ramponi. "On Feasible Rewards in Multi-Agent Inverse Reinforcement Learning." Advances in Neural Information Processing Systems, 2025.
Markdown
[Freihaut and Ramponi. "On Feasible Rewards in Multi-Agent Inverse Reinforcement Learning." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/freihaut2025neurips-feasible/)
BibTeX
@inproceedings{freihaut2025neurips-feasible,
  title     = {{On Feasible Rewards in Multi-Agent Inverse Reinforcement Learning}},
  author    = {Freihaut, Till and Ramponi, Giorgia},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/freihaut2025neurips-feasible/}
}