Inter-Task Similarity for Lifelong Reinforcement Learning in Heterogeneous Tasks
Abstract
Reinforcement learning (RL) is a learning paradigm in which an agent learns by trial and error through interaction with the environment it inhabits. By letting the agent acquire knowledge from its own experience, RL has been successfully applied to complex domains such as robotics. However, for non-trivial problems, training an RL agent can require very long periods of time. Lifelong machine learning (LML) is a learning setting in which the agent learns to solve tasks sequentially, leveraging knowledge accumulated from previously solved tasks to learn a new one better and faster. Most LML work relies heavily on the assumption that tasks are similar to each other. However, this may not hold in domains with a high degree of task diversity that could benefit from adopting a lifelong learning approach, e.g., service robotics. Therefore, in this research we will address the problem of learning to solve a sequence of heterogeneous RL tasks (i.e., tasks that differ in their state-action spaces).
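To make the setting concrete, the sketch below shows what "heterogeneous" means here: a sequence of toy tasks whose state and action spaces differ in size, so a value function learned on one task cannot be copied directly to the next. This is a minimal illustrative example, not the paper's method; the `Task` environment and its dynamics are invented for illustration.

```python
import random

class Task:
    """A toy episodic task; n_states and n_actions vary across tasks,
    which is what makes the task sequence heterogeneous."""
    def __init__(self, n_states, n_actions, seed=0):
        self.n_states = n_states
        self.n_actions = n_actions
        self.goal = random.Random(seed).randrange(n_states)

    def step(self, state, action):
        # Invented ring-world dynamics: actions move the agent forward.
        nxt = (state + action + 1) % self.n_states
        return nxt, (1.0 if nxt == self.goal else 0.0)

def q_learn(task, episodes=200, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Standard tabular Q-learning; the Q-table's shape is tied to the
    task's state-action space, so tables from different tasks differ."""
    rng = random.Random(seed)
    q = [[0.0] * task.n_actions for _ in range(task.n_states)]
    for _ in range(episodes):
        s = 0
        for _ in range(20):
            if rng.random() < eps:
                a = rng.randrange(task.n_actions)
            else:
                a = max(range(task.n_actions), key=lambda i: q[s][i])
            s2, r = task.step(s, a)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

# A lifelong sequence of heterogeneous tasks: note the differing
# numbers of states and actions, hence incompatible Q-table shapes.
tasks = [Task(5, 2, seed=1), Task(8, 3, seed=2), Task(6, 2, seed=3)]
q_tables = [q_learn(t) for t in tasks]
```

Because the Q-tables have incompatible shapes (5x2, 8x3, 6x2), naive reuse fails; transferring knowledge across such tasks is exactly what motivates an inter-task similarity measure.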
Cite
Text
Serrano. "Inter-Task Similarity for Lifelong Reinforcement Learning in Heterogeneous Tasks." International Joint Conference on Artificial Intelligence, 2021. doi:10.24963/IJCAI.2021/689
BibTeX
@inproceedings{serrano2021ijcai-inter,
title = {{Inter-Task Similarity for Lifelong Reinforcement Learning in Heterogeneous Tasks}},
author = {Serrano, Sergio A.},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2021},
pages = {4915--4916},
doi = {10.24963/IJCAI.2021/689},
url = {https://mlanthology.org/ijcai/2021/serrano2021ijcai-inter/}
}