Bigger, Better, Faster: Human-Level Atari with Human-Level Efficiency
Abstract
We introduce a value-based RL agent, which we call BBF, that achieves super-human performance in the Atari 100K benchmark. BBF relies on scaling the neural networks used for value estimation, as well as a number of other design choices that enable this scaling in a sample-efficient manner. We conduct extensive analyses of these design choices and provide insights for future work. We end with a discussion about updating the goalposts for sample-efficient RL research on the ALE. We make our code and data publicly available at https://github.com/google-research/google-research/tree/master/bigger_better_faster.
Cite
Text
Schwarzer et al. "Bigger, Better, Faster: Human-Level Atari with Human-Level Efficiency." International Conference on Machine Learning, 2023.

Markdown
[Schwarzer et al. "Bigger, Better, Faster: Human-Level Atari with Human-Level Efficiency." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/schwarzer2023icml-bigger/)

BibTeX
@inproceedings{schwarzer2023icml-bigger,
  title     = {{Bigger, Better, Faster: Human-Level Atari with Human-Level Efficiency}},
  author    = {Schwarzer, Max and Obando Ceron, Johan Samir and Courville, Aaron and Bellemare, Marc G. and Agarwal, Rishabh and Castro, Pablo Samuel},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {30365--30380},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/schwarzer2023icml-bigger/}
}