Off-Belief Learning
Abstract
The standard problem setting in Dec-POMDPs is self-play, where the goal is to find a set of policies that play optimally together. Policies learned through self-play may adopt arbitrary conventions and implicitly rely on multi-step reasoning based on fragile assumptions about other agents' actions, and thus fail when paired with humans or independently trained agents at test time. To address this, we present off-belief learning (OBL). At each timestep OBL agents follow a policy $\pi_1$ that is optimized assuming past actions were taken by a given, fixed policy ($\pi_0$), but assuming that future actions will be taken by $\pi_1$. When $\pi_0$ is uniform random, OBL converges to an optimal policy that does not rely on inferences based on other agents' behavior (an optimal grounded policy). OBL can be iterated in a hierarchy, where the optimal policy from one level becomes the input to the next, thereby introducing multi-level cognitive reasoning in a controlled manner. Unlike existing approaches, which may converge to any equilibrium policy, OBL converges to a unique policy, making it suitable for zero-shot coordination (ZSC). OBL can be scaled to high-dimensional settings with a fictitious transition mechanism and shows strong performance in both a toy setting and Hanabi, the benchmark human-AI and ZSC problem.
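The abstract describes the fictitious-transition mechanism only at a high level. Below is a minimal sketch of how a one-sample OBL value target could be computed, assuming access to a belief model conditioned on $\pi_0$ and a learned Q-function for $\pi_1$. All helper names (`sample_belief_pi0`, `env_step`, `q_pi1`) are hypothetical and not taken from the paper's implementation.

```python
from typing import Callable, Sequence


def obl_q_target(
    aoh,                                  # acting agent's action-observation history
    action,                               # action whose value is being estimated
    sample_belief_pi0: Callable,          # aoh -> state, as if all PAST actions came from pi_0
    env_step: Callable,                   # (state, action) -> (reward, next_aoh, done)
    q_pi1: Callable,                      # (aoh, action) -> Q-estimate under the learned policy pi_1
    actions: Sequence,                    # available actions
    gamma: float = 0.99,
) -> float:
    """One-sample OBL target: the past is explained by pi_0, the future is played by pi_1."""
    # Resample the hidden state under the belief that past actions were taken by the
    # fixed policy pi_0 (the "fictitious transition" step).
    fict_state = sample_belief_pi0(aoh)
    # Apply the current action in that fictitious state instead of the actual one.
    reward, next_aoh, done = env_step(fict_state, action)
    if done:
        return reward
    # Bootstrap the future with pi_1 (here, greedy with respect to its Q-function).
    return reward + gamma * max(q_pi1(next_aoh, a) for a in actions)
```

When $\pi_0$ is uniform random, targets computed this way cannot attach meaning to conventions encoded in past actions, which is what makes the converged policy grounded; iterating the procedure with the resulting policy as the next level's $\pi_0$ gives the controlled multi-level reasoning hierarchy mentioned above.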
Cite
Text
Hu et al. "Off-Belief Learning." International Conference on Machine Learning, 2021.

Markdown
[Hu et al. "Off-Belief Learning." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/hu2021icml-offbelief/)

BibTeX
@inproceedings{hu2021icml-offbelief,
title = {{Off-Belief Learning}},
author = {Hu, Hengyuan and Lerer, Adam and Cui, Brandon and Pineda, Luis and Brown, Noam and Foerster, Jakob},
booktitle = {International Conference on Machine Learning},
year = {2021},
pages = {4369--4379},
volume = {139},
url = {https://mlanthology.org/icml/2021/hu2021icml-offbelief/}
}