A Lower Bound for the Sample Complexity of Inverse Reinforcement Learning

Abstract

Inverse reinforcement learning (IRL) is the task of finding a reward function that generates a desired optimal policy for a given Markov Decision Process (MDP). This paper develops an information-theoretic lower bound for the sample complexity of the finite-state, finite-action IRL problem. A geometric construction of $\beta$-strict separable IRL problems using spherical codes is considered. Bounds on the size of this ensemble, as well as on the Kullback-Leibler divergence between the trajectory distributions it induces, are derived. The ensemble is then combined with Fano’s inequality to establish a sample complexity lower bound of $\Omega(n \log n)$, where $n$ is the number of states in the MDP.
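
As a worked sketch of how a Fano-type argument of the kind described in the abstract produces such a bound (the symbols $M$, $m$, and $P_j$ below are illustrative notation introduced here, not the paper's, and the exact ensemble size and KL constants should be taken from the paper itself): suppose one of $M$ IRL problems, with trajectory distributions $P_1, \dots, P_M$, is chosen uniformly at random and the learner observes $m$ independent trajectories. Fano's inequality then gives

$$\mathbb{P}[\text{error}] \;\ge\; 1 - \frac{m \, \max_{j \neq k} D_{\mathrm{KL}}(P_j \,\|\, P_k) + \log 2}{\log M}.$$

Keeping the error probability at most $1/2$ therefore requires

$$m \;\ge\; \frac{\tfrac{1}{2}\log M - \log 2}{\max_{j \neq k} D_{\mathrm{KL}}(P_j \,\|\, P_k)},$$

so an ensemble whose log-cardinality grows as $\Theta(n \log n)$ while its pairwise KL divergences stay bounded by a constant forces $m = \Omega(n \log n)$.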

Cite

Text

Komanduru and Honorio. "A Lower Bound for the Sample Complexity of Inverse Reinforcement Learning." International Conference on Machine Learning, 2021.

Markdown

[Komanduru and Honorio. "A Lower Bound for the Sample Complexity of Inverse Reinforcement Learning." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/komanduru2021icml-lower/)

BibTeX

@inproceedings{komanduru2021icml-lower,
  title     = {{A Lower Bound for the Sample Complexity of Inverse Reinforcement Learning}},
  author    = {Komanduru, Abi and Honorio, Jean},
  booktitle = {International Conference on Machine Learning},
  year      = {2021},
  pages     = {5676--5685},
  volume    = {139},
  url       = {https://mlanthology.org/icml/2021/komanduru2021icml-lower/}
}