Event-Based Federated Q-Learning
Abstract
This paper introduces an event-based communication mechanism for federated Q-learning algorithms, improving convergence while reducing communication overhead. We present a communication scheme that leverages event-based triggers to decide when agents transmit their Q-tables to a central server. Through theoretical analysis and empirical evaluation, we demonstrate the convergence properties of event-based QAvg, highlighting its effectiveness in federated reinforcement learning settings.
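The abstract describes agents that send their Q-tables to a central server only when a communication event fires. The paper does not specify the trigger rule here, so the following is a minimal sketch under assumptions: a hypothetical threshold-based trigger (transmit when the local Q-table has drifted sufficiently from the last copy sent) and a simple element-wise average on the server, in the spirit of QAvg. All function names are illustrative, not the authors' API.

```python
import numpy as np

def event_triggered(q_local, q_last_sent, threshold):
    """Hypothetical trigger: fire a communication event only when the
    local Q-table has drifted from the last transmitted copy by more
    than `threshold` in the max norm."""
    return float(np.max(np.abs(q_local - q_last_sent))) > threshold

def server_average(q_tables):
    """QAvg-style aggregation sketch: element-wise average of the
    Q-tables received from the participating agents."""
    return np.mean(np.stack(q_tables), axis=0)

# Illustrative round: two agents, a 2-state x 2-action Q-table each.
q_agent_a = np.zeros((2, 2))
q_agent_b = np.ones((2, 2))
last_sent_a = np.zeros((2, 2))  # agent A has not changed -> no event
last_sent_b = np.zeros((2, 2))  # agent B drifted by 1.0 -> event fires

sent = [q for q, last in [(q_agent_a, last_sent_a), (q_agent_b, last_sent_b)]
        if event_triggered(q, last, threshold=0.5)]
q_global = server_average(sent) if sent else None
```

In this toy round only agent B communicates, so the server's aggregate equals B's table; skipping agent A's unchanged table is exactly the communication saving the event-based scheme targets.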
Cite
Text
Er and Muehlebach. "Event-Based Federated Q-Learning." ICML 2024 Workshops: RLControlTheory, 2024.
Markdown
[Er and Muehlebach. "Event-Based Federated Q-Learning." ICML 2024 Workshops: RLControlTheory, 2024.](https://mlanthology.org/icmlw/2024/er2024icmlw-eventbased/)
BibTeX
@inproceedings{er2024icmlw-eventbased,
title = {{Event-Based Federated Q-Learning}},
author = {Er, Guner Dilsad and Muehlebach, Michael},
booktitle = {ICML 2024 Workshops: RLControlTheory},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/er2024icmlw-eventbased/}
}