Truthful Self-Play
Abstract
We present a general framework for evolutionary learning of emergent, unbiased state representations without any supervision. Evolutionary frameworks such as self-play converge to bad local optima in multi-agent reinforcement learning in non-cooperative, partially observable environments with communication, owing to information asymmetry. Our proposed framework is a simple modification of self-play inspired by mechanism design, also known as reverse game theory, that elicits truthful signals and makes the agents cooperative. The key idea is to add imaginary rewards using the peer prediction method, i.e., a mechanism for evaluating the validity of information exchanged between agents in a decentralized environment. Numerical experiments with predator-prey, traffic-junction, and StarCraft tasks demonstrate the state-of-the-art performance of our framework.
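To make the key idea concrete, below is a minimal sketch of how a peer-prediction "imaginary reward" could be combined with the environment reward during self-play. It assumes discrete communication signals scored with a log scoring rule against a randomly fixed peer under an empirical joint model; the function name peer_prediction_rewards, the joint_counts table, the ring pairing, and the weight beta are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def peer_prediction_rewards(signals, joint_counts, eps=1e-6):
    # signals: integer signal id emitted by each agent, shape (n_agents,)
    # joint_counts: running co-occurrence counts of signal pairs,
    #               shape (n_signals, n_signals)
    n = len(signals)
    rewards = np.zeros(n)
    for i in range(n):
        j = (i + 1) % n  # reference peer (simple ring pairing; illustrative)
        # empirical conditional p(signal_j | signal_i), smoothed by eps
        row = joint_counts[signals[i]] + eps
        p = row / row.sum()
        # log scoring rule: agent i is rewarded for signals that
        # predict its peer's signal well, which favors truthful reports
        rewards[i] = np.log(p[signals[j]])
    return rewards

# Usage sketch: add the imaginary reward to the environment reward,
# where beta is a hypothetical weighting coefficient.
#   total_reward = env_reward + beta * peer_prediction_rewards(signals, joint_counts)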
Cite
Text
Ohsawa. "Truthful Self-Play." International Conference on Learning Representations, 2023.
Markdown
[Ohsawa. "Truthful Self-Play." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/ohsawa2023iclr-truthful/)
BibTeX
@inproceedings{ohsawa2023iclr-truthful,
  title = {{Truthful Self-Play}},
  author = {Ohsawa, Shohei},
  booktitle = {International Conference on Learning Representations},
  year = {2023},
  url = {https://mlanthology.org/iclr/2023/ohsawa2023iclr-truthful/}
}