Learning True Objectives: Linear Algebraic Characterizations of Identifiability in Inverse Reinforcement Learning
Abstract
Inverse reinforcement learning (IRL) has emerged as a powerful paradigm for extracting expert skills from observed behavior, with applications ranging from autonomous systems to human-robot interaction. However, the identifiability issue within IRL poses a significant challenge, as multiple reward functions can explain the same observed behavior. This paper provides a linear algebraic characterization of several identifiability notions for an entropy-regularized finite-horizon Markov decision process (MDP). Moreover, our approach allows for the seamless integration of prior knowledge, in the form of featurized reward functions, to enhance the identifiability of IRL problems. The results are demonstrated with experiments on a grid world environment.
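To give a rough sense of the linear algebraic flavor (this is an illustrative sketch, not the paper's actual construction): when the reward is featurized as r = Φθ, whether the parameter θ is pinned down by observed behavior reduces to a rank condition on a linear system. A minimal NumPy sketch with a hypothetical feature matrix:

```python
import numpy as np

# Hypothetical feature matrix Phi: each row is the feature vector phi(s, a)
# for one state-action pair; the reward is assumed linear, r = Phi @ theta.
Phi = np.array([
    [1.0, 0.0, 2.0],
    [0.0, 1.0, 1.0],
    [1.0, 1.0, 3.0],  # linearly dependent: row 3 = row 1 + row 2
])

# If rank(Phi) < number of parameters, distinct values of theta produce the
# same reward on every observed state-action pair, so theta is not
# identifiable from behavior alone.
rank = np.linalg.matrix_rank(Phi)
n_params = Phi.shape[1]
print(rank, n_params, rank == n_params)  # here: 2 3 False
```

Prior knowledge in the form of a richer (full-column-rank) feature design is one way such a rank deficiency can be removed, which is the role featurized rewards play in the paper's identifiability analysis.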
Cite
Text
Shehab et al. "Learning True Objectives: Linear Algebraic Characterizations of Identifiability in Inverse Reinforcement Learning." Proceedings of the 6th Annual Learning for Dynamics & Control Conference, 2024.

Markdown

[Shehab et al. "Learning True Objectives: Linear Algebraic Characterizations of Identifiability in Inverse Reinforcement Learning." Proceedings of the 6th Annual Learning for Dynamics & Control Conference, 2024.](https://mlanthology.org/l4dc/2024/shehab2024l4dc-learning/)

BibTeX
@inproceedings{shehab2024l4dc-learning,
title = {{Learning True Objectives: Linear Algebraic Characterizations of Identifiability in Inverse Reinforcement Learning}},
author = {Shehab, Mohamad Louai and Aspeel, Antoine and Arechiga, Nikos and Best, Andrew and Ozay, Necmiye},
booktitle = {Proceedings of the 6th Annual Learning for Dynamics \& Control Conference},
year = {2024},
pages = {1266--1277},
volume = {242},
url = {https://mlanthology.org/l4dc/2024/shehab2024l4dc-learning/}
}