Hard Tasks First: Multi-Task Reinforcement Learning Through Task Scheduling
Abstract
Multi-task reinforcement learning (RL) faces the significant challenge of varying task difficulties, often leading to negative transfer when simpler tasks overshadow the learning of more complex ones. To overcome this challenge, we propose a novel algorithm, Scheduled Multi-Task Training (SMT), that strategically prioritizes more challenging tasks, thereby enhancing overall learning efficiency. SMT introduces a dynamic task prioritization strategy, underpinned by an effective metric for assessing task difficulty. This metric ensures an efficient and targeted allocation of training resources, significantly improving learning outcomes. Additionally, SMT incorporates a reset mechanism that periodically reinitializes key network parameters to mitigate the simplicity bias, further enhancing the adaptability and robustness of the learning process across diverse tasks. The efficacy of SMT’s scheduling method is validated through experiments on challenging Meta-World benchmarks, where it significantly improves performance.
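The sketch below illustrates the general idea of difficulty-weighted task scheduling described in the abstract; it is not the authors' implementation. It assumes task difficulty is estimated from a running per-task success rate and that harder tasks are sampled more often via a softmax over difficulty. All names (`TaskScheduler`, `alpha`, `temperature`) are hypothetical, and the periodic parameter-reset mechanism is not shown.

```python
# Minimal sketch of difficulty-weighted task scheduling (hypothetical,
# not the SMT authors' code). Difficulty is approximated as one minus
# an exponential moving average of task success.
import numpy as np


class TaskScheduler:
    def __init__(self, num_tasks, alpha=0.1, temperature=1.0, seed=0):
        self.success = np.zeros(num_tasks)  # running success-rate estimate per task
        self.alpha = alpha                  # EMA step size for the estimate
        self.temperature = temperature      # softness of the sampling distribution
        self.rng = np.random.default_rng(seed)

    def update(self, task_id, succeeded):
        # Exponential moving average of per-task success.
        self.success[task_id] += self.alpha * (float(succeeded) - self.success[task_id])

    def sample(self):
        # Treat low success as high difficulty; sample harder tasks more often.
        difficulty = 1.0 - self.success
        logits = difficulty / self.temperature
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        return self.rng.choice(len(probs), p=probs)


if __name__ == "__main__":
    sched = TaskScheduler(num_tasks=3)
    sched.update(0, True)   # task 0 looks easy so far
    sched.update(1, False)  # task 1 looks hard so far
    print(sched.sample())   # harder tasks are drawn with higher probability
```

In practice such a scheduler would be queried at the start of each episode to pick which Meta-World task to collect experience from, with the success signal fed back after the episode ends.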
Cite
Text
Cho et al. "Hard Tasks First: Multi-Task Reinforcement Learning Through Task Scheduling." International Conference on Machine Learning, 2024.
Markdown
[Cho et al. "Hard Tasks First: Multi-Task Reinforcement Learning Through Task Scheduling." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/cho2024icml-hard/)
BibTeX
@inproceedings{cho2024icml-hard,
title = {{Hard Tasks First: Multi-Task Reinforcement Learning Through Task Scheduling}},
author = {Cho, Myungsik and Park, Jongeui and Lee, Suyoung and Sung, Youngchul},
booktitle = {International Conference on Machine Learning},
year = {2024},
pages = {8556--8577},
volume = {235},
url = {https://mlanthology.org/icml/2024/cho2024icml-hard/}
}