TEAMSTER: Model-Based Reinforcement Learning for Ad Hoc Teamwork (Abstract Reprint)

Abstract

This paper investigates the use of model-based reinforcement learning in the context of ad hoc teamwork. We introduce a novel approach, named TEAMSTER, in which the model of the environment and the model of the teammates' behavior are learned separately. Our results in four different domains from the multi-agent systems literature show that, compared with the state-of-the-art PLASTIC algorithms, TEAMSTER is more flexible than PLASTIC-Model, since it learns the environment's model rather than assuming a perfect hand-coded one, and more robust and efficient than PLASTIC-Policy, since it can continuously adapt to newly encountered teams without implicitly learning a new environment model from scratch.
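To make the separation concrete, below is a minimal, hypothetical sketch of the idea the abstract describes: an ad hoc agent maintaining two independently learned, count-based models, one for the environment's dynamics and one for the teammates' behavior. All class and method names here are illustrative assumptions, not the authors' implementation; TEAMSTER's actual learning machinery is described in the full paper.

# Illustrative sketch (assumed names, not the authors' code): two separately
# learned, count-based models, as the abstract describes.
from collections import defaultdict
import random


class EnvironmentModel:
    """Count-based estimate of P(next_state | state, joint_action)."""

    def __init__(self):
        self._counts = defaultdict(lambda: defaultdict(int))

    def update(self, state, joint_action, next_state):
        self._counts[(state, joint_action)][next_state] += 1

    def sample(self, state, joint_action):
        dist = self._counts[(state, joint_action)]
        if not dist:
            return state  # unseen transition: fall back to staying put
        next_states, counts = zip(*dist.items())
        return random.choices(next_states, weights=counts)[0]


class TeammateModel:
    """Count-based estimate of P(teammate_action | state), kept separate
    from the dynamics so a new team requires re-learning only this model."""

    def __init__(self, actions):
        self._actions = actions
        self._counts = defaultdict(lambda: defaultdict(int))

    def update(self, state, teammate_action):
        self._counts[state][teammate_action] += 1

    def sample(self, state):
        dist = self._counts[state]
        if not dist:
            return random.choice(self._actions)  # uniform prior over actions
        actions, counts = zip(*dist.items())
        return random.choices(actions, weights=counts)[0]


# Simulating one step ahead combines both models: predict the teammate's
# action, then query the learned dynamics with the resulting joint action.
def simulate_step(env_model, teammate_model, state, my_action):
    teammate_action = teammate_model.sample(state)
    return env_model.sample(state, (my_action, teammate_action))

Under this separation, encountering a new team requires resetting or re-fitting only the TeammateModel while the EnvironmentModel carries over, which is the property the abstract credits for TEAMSTER's advantage over PLASTIC-Policy.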

Cite

Text

Ribeiro et al. "TEAMSTER: Model-Based Reinforcement Learning for Ad Hoc Teamwork (Abstract Reprint)." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/aaai.v38i20.30608

Markdown

[Ribeiro et al. "TEAMSTER: Model-Based Reinforcement Learning for Ad Hoc Teamwork (Abstract Reprint)." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/ribeiro2024aaai-teamster/) doi:10.1609/aaai.v38i20.30608

BibTeX

@inproceedings{ribeiro2024aaai-teamster,
  title     = {{TEAMSTER: Model-Based Reinforcement Learning for Ad Hoc Teamwork (Abstract Reprint)}},
  author    = {Ribeiro, João G. and Rodrigues, Gonçalo and Sardinha, Alberto and Melo, Francisco S.},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {22708},
  doi       = {10.1609/aaai.v38i20.30608},
  url       = {https://mlanthology.org/aaai/2024/ribeiro2024aaai-teamster/}
}