Sample Complexity of Multi-Task Reinforcement Learning
Abstract
Transferring knowledge across a sequence of reinforcement-learning tasks is challenging and has a number of important applications. Though there is encouraging empirical evidence that transfer can improve performance in subsequent reinforcement-learning tasks, there has been very little theoretical analysis. In this paper, we introduce a new multi-task algorithm for a sequence of reinforcement-learning tasks in which each task is sampled independently from an unknown distribution over a finite set of Markov decision processes whose parameters are initially unknown. For this setting, we prove under certain assumptions that transfer significantly reduces the per-task sample complexity of exploration relative to standard single-task algorithms. Our multi-task algorithm also has the desirable property that it is guaranteed not to exhibit negative transfer: in the worst case, its per-task sample complexity is comparable to that of the corresponding single-task algorithm.
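To make the setting concrete, below is a minimal sketch in Python under assumed details; it is not the authors' algorithm. It builds a finite set of C small tabular MDPs, draws a task i.i.d. from an unknown distribution over them, and uses a likelihood-based matching step as a stand-in for the identification idea: once candidate models are known, a new task only needs enough data to distinguish among C hypotheses rather than to estimate a full model from scratch. All names here (random_mdp, identify_task, the 200-step exploration budget) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: a finite set of C small MDPs over S states and A actions.
# Each task is drawn i.i.d. from a distribution unknown to the learner.
S, A, C = 4, 2, 3

def random_mdp(rng):
    P = rng.dirichlet(np.ones(S), size=(S, A))  # P[s, a] is a distribution over next states
    R = rng.uniform(size=(S, A))                # mean rewards in [0, 1]
    return P, R

mdps = [random_mdp(rng) for _ in range(C)]
task_dist = rng.dirichlet(np.ones(C))           # unknown to the learner

def identify_task(counts, mdps):
    """Match empirical transition counts against the C candidate models by
    log-likelihood; distinguishing among C hypotheses needs far fewer samples
    than estimating a fresh transition model."""
    best, best_ll = None, -np.inf
    for c, (P, _) in enumerate(mdps):
        ll = np.sum(counts * np.log(P + 1e-12))
        if ll > best_ll:
            best, best_ll = c, ll
    return best

# Simulate one later task: draw it from the unknown task distribution,
# collect a short random-exploration trajectory, then identify which of
# the C candidate models generated it.
true_c = rng.choice(C, p=task_dist)
P_true, _ = mdps[true_c]
counts = np.zeros((S, A, S))
s = 0
for _ in range(200):                            # short exploration budget
    a = rng.integers(A)
    s2 = rng.choice(S, p=P_true[s, a])
    counts[s, a, s2] += 1
    s = s2

print("true task:", true_c, "identified:", identify_task(counts, mdps))

In this toy version the candidate models are given exactly; in the paper's setting they would themselves have to be learned from earlier tasks, which is where the assumptions about the task distribution and model separation come in.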
Cite
Text
Brunskill and Li. "Sample Complexity of Multi-Task Reinforcement Learning." Conference on Uncertainty in Artificial Intelligence, 2013.

Markdown
[Brunskill and Li. "Sample Complexity of Multi-Task Reinforcement Learning." Conference on Uncertainty in Artificial Intelligence, 2013.](https://mlanthology.org/uai/2013/brunskill2013uai-sample/)

BibTeX
@inproceedings{brunskill2013uai-sample,
  title     = {{Sample Complexity of Multi-Task Reinforcement Learning}},
  author    = {Brunskill, Emma and Li, Lihong},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {2013},
  url       = {https://mlanthology.org/uai/2013/brunskill2013uai-sample/}
}