Test-Time Regret Minimization in Meta Reinforcement Learning

Abstract

Meta reinforcement learning specifies a distribution over a set of tasks on which the agent can train at will; the agent is then asked to learn an optimal policy for any test task efficiently. In this paper, we consider a finite set of tasks modeled through Markov decision processes with various dynamics. We assume that a long training phase has already been completed, from which the set of tasks is perfectly recovered, and we focus on regret minimization against the optimal policy in the unknown test task. Under a separation condition, which states the existence of a state-action pair that reveals one task against another, Chen et al. (2022) show that $O(M^2 \log(H))$ regret can be achieved, where $M, H$ are the number of tasks in the set and test episodes, respectively. In our first contribution, we demonstrate that the latter rate is nearly optimal by developing a novel lower bound for test-time regret minimization under separation, showing that a linear dependence on $M$ is unavoidable. Then, we present a family of stronger yet reasonable assumptions beyond separation, which we call strong identifiability, enabling algorithms that achieve fast $\log (H)$ rates and sublinear dependence on $M$ simultaneously. Our paper provides a new understanding of the statistical barriers of test-time regret minimization and of when fast rates can be achieved.
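To make the separation condition concrete, here is a minimal toy sketch (not the authors' algorithm, and all names are illustrative): two tasks are separated when some state-action pair behaves differently under them, so an agent that repeatedly visits that revealing pair can identify the test task from its empirical statistics.

```python
import random

def identify_task(true_task, tasks, revealing_probs, n_visits=50, seed=0):
    """Toy task identification via a revealing state-action pair.

    `revealing_probs[m]` is task m's (assumed Bernoulli) success probability
    at the revealing pair; the tasks are "separated" when these differ.
    We visit the pair `n_visits` times under the true task, then return the
    candidate task whose probability best matches the empirical frequency.
    """
    rng = random.Random(seed)
    p_true = revealing_probs[true_task]
    successes = sum(rng.random() < p_true for _ in range(n_visits))
    p_hat = successes / n_visits
    # Elimination by closeness: well-separated wrong tasks are ruled out
    # with probability approaching one as n_visits grows.
    return min(tasks, key=lambda m: abs(revealing_probs[m] - p_hat))

# Two tasks separated at the revealing pair: 0.9 vs. 0.1 success probability.
probs = {0: 0.9, 1: 0.1}
print(identify_task(0, [0, 1], probs))  # identifies task 0
print(identify_task(1, [0, 1], probs))  # identifies task 1
```

The larger the gap between the tasks' probabilities at the revealing pair, the fewer visits are needed, which is the intuition behind the $\log(H)$-type rates discussed in the abstract.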

Cite

Text

Mutti and Tamar. "Test-Time Regret Minimization in Meta Reinforcement Learning." International Conference on Machine Learning, 2024.

Markdown

[Mutti and Tamar. "Test-Time Regret Minimization in Meta Reinforcement Learning." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/mutti2024icml-testtime/)

BibTeX

@inproceedings{mutti2024icml-testtime,
  title     = {{Test-Time Regret Minimization in Meta Reinforcement Learning}},
  author    = {Mutti, Mirco and Tamar, Aviv},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {37016--37040},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/mutti2024icml-testtime/}
}