Multi-World Model in Continual Reinforcement Learning
Abstract
A World Model is a generative network that can predict future states of the single environment it was trained on. This research proposes the Multi-World Model, a foundation model for continual reinforcement learning that is built from World Models and trained on many different environments, enabling it to generalize state-sequence predictions even to unseen settings.
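The abstract's core idea is a single generative dynamics model trained on transitions pooled from many environments rather than one. The snippet below is a minimal illustrative sketch of that idea in PyTorch, not the paper's implementation; the class name MultiWorldModel, the MLP dynamics network, and the pooled training loop are assumptions made purely for illustration.

# Hypothetical sketch (not the paper's code): one dynamics model fit on
# transitions drawn from several environments, so its next-state prediction
# is shared across tasks instead of tied to a single environment.
import torch
import torch.nn as nn

class MultiWorldModel(nn.Module):
    """Predicts the next state from (state, action), shared across environments."""
    def __init__(self, state_dim: int, action_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, state_dim),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        # Concatenate state and action, predict the next state directly.
        return self.net(torch.cat([state, action], dim=-1))

def train_on_many_envs(model, env_batches, epochs=10, lr=1e-3):
    """env_batches: list of (states, actions, next_states) tensors, one per environment."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        # Interleave batches from different environments so the model
        # is continually exposed to all of them during training.
        for states, actions, next_states in env_batches:
            loss = loss_fn(model(states, actions), next_states)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

# Example usage with random tensors standing in for two environments:
# batches = [(torch.randn(64, 8), torch.randn(64, 2), torch.randn(64, 8)) for _ in range(2)]
# model = train_on_many_envs(MultiWorldModel(state_dim=8, action_dim=2), batches)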
Cite
Text
Shen. "Multi-World Model in Continual Reinforcement Learning." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I21.30555Markdown
[Shen. "Multi-World Model in Continual Reinforcement Learning." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/shen2024aaai-multi/) doi:10.1609/AAAI.V38I21.30555BibTeX
@inproceedings{shen2024aaai-multi,
title = {{Multi-World Model in Continual Reinforcement Learning}},
author = {Shen, Kevin},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2024},
pages = {23757--23759},
doi = {10.1609/AAAI.V38I21.30555},
url = {https://mlanthology.org/aaai/2024/shen2024aaai-multi/}
}