Learning to Steer Markovian Agents Under Model Uncertainty

Abstract

Designing incentives for an adapting population is a ubiquitous problem in a wide array of economic applications and beyond. In this work, we study how to design additional rewards to steer multi-agent systems towards desired policies \emph{without} prior knowledge of the agents' underlying learning dynamics. We introduce a model-based non-episodic Reinforcement Learning (RL) formulation for our steering problem. Importantly, we focus on learning a \emph{history-dependent} steering strategy to handle the inherent model uncertainty about the agents' learning dynamics. We introduce a novel objective function that encodes the desiderata of achieving a good steering outcome at reasonable cost. Theoretically, we identify conditions for the existence of steering strategies that guide agents to the desired policies. Complementing our theoretical contributions, we provide empirical algorithms that approximately solve our objective and effectively tackle the challenge of learning history-dependent strategies. We demonstrate the efficacy of our algorithms through empirical evaluations.
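
To make the setup concrete, below is a minimal sketch (not the paper's algorithm) of steering two Markovian learners in a 2x2 coordination game. The agents follow an assumed multiplicative-weights update whose learning rate the mediator does not know; the mediator applies a simple history-dependent rule that raises a bonus on the target action whenever observed progress stalls and stops paying once the agents have committed. The game, the update rule, the stall-based steering rule, and the cost weight are all illustrative assumptions introduced here for exposition.

import numpy as np

def agent_update(p, reward_vec, lr):
    # One Markovian learning step: multiplicative-weights (hedge-style) update
    # of a probability vector over actions, given per-action rewards.
    w = p * np.exp(lr * reward_vec)
    return w / w.sum()

def steer(T=200, lr=0.3, cost_weight=0.01):
    # Symmetric 2x2 coordination game: both (0,0) and (1,1) are equilibria;
    # the mediator wants to steer both agents to action 1.
    payoff = np.array([[3.0, 0.0],
                       [0.0, 2.0]])
    target = np.array([0.0, 1.0])      # desired policy for each agent
    p1 = np.array([0.9, 0.1])          # agents start near the undesired equilibrium
    p2 = np.array([0.9, 0.1])
    bonus, total_bonus = 0.0, 0.0
    prev_progress = min(p1[1], p2[1])
    for _ in range(T):
        progress = min(p1[1], p2[1])
        # History-dependent steering rule (illustrative): the agents' learning rate
        # is unknown, so the mediator watches the observed trajectory and raises the
        # bonus whenever progress toward the target action has stalled; it stops
        # paying once both agents are committed to the target action.
        if progress >= 0.8:
            bonus = 0.0
        elif progress <= prev_progress + 1e-3:
            bonus += 0.5
        prev_progress = progress
        total_bonus += 2 * bonus       # bonus paid to each of the two agents
        # Expected per-action rewards against the opponent's current mixed policy,
        # plus the mediator's bonus on the target action.
        r1 = payoff @ p2 + np.array([0.0, bonus])
        r2 = payoff @ p1 + np.array([0.0, bonus])
        p1 = agent_update(p1, r1, lr)
        p2 = agent_update(p2, r2, lr)
    gap = np.abs(p1 - target).sum() + np.abs(p2 - target).sum()
    # Schematic steering objective: small gap to the target policy plus a weighted
    # total steering cost (lower is better).
    return gap + cost_weight * total_bonus, p1, p2

objective, p1, p2 = steer()
print(f"objective={objective:.3f}, final policies: {p1.round(3)}, {p2.round(3)}")

Because the steering rule conditions on the observed trajectory rather than on a known learning rate, the same code steers agents with faster or slower dynamics (different lr) without modification; this is the role that history-dependence plays under model uncertainty in the abstract above.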

Cite

Text

Huang et al. "Learning to Steer Markovian Agents Under Model Uncertainty." ICML 2024 Workshops: ARLET, 2024.

Markdown

[Huang et al. "Learning to Steer Markovian Agents Under Model Uncertainty." ICML 2024 Workshops: ARLET, 2024.](https://mlanthology.org/icmlw/2024/huang2024icmlw-learning/)

BibTeX

@inproceedings{huang2024icmlw-learning,
  title     = {{Learning to Steer Markovian Agents Under Model Uncertainty}},
  author    = {Huang, Jiawei and Thoma, Vinzenz and Shen, Zebang and Nax, Heinrich H. and He, Niao},
  booktitle = {ICML 2024 Workshops: ARLET},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/huang2024icmlw-learning/}
}