Bigger, Regularized, Optimistic: Scaling for Compute and Sample-Efficient Continuous Control

Abstract

Sample efficiency in Reinforcement Learning (RL) has traditionally been driven by algorithmic enhancements. In this work, we demonstrate that scaling can also lead to substantial improvements. We conduct a thorough investigation into the interplay of scaling model capacity and domain-specific RL enhancements. These empirical findings inform the design choices underlying our proposed BRO (Bigger, Regularized, Optimistic) algorithm. The key innovation behind BRO is that strong regularization allows for effective scaling of the critic networks, which, paired with optimistic exploration, leads to superior performance. BRO achieves state-of-the-art results, significantly outperforming the leading model-based and model-free algorithms across 40 complex tasks from the DeepMind Control, MetaWorld, and MyoSuite benchmarks. BRO is the first model-free algorithm to achieve near-optimal policies in the notoriously challenging Dog and Humanoid tasks.
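The abstract's central claim is that strong regularization is what makes scaling the critic network effective. Below is a minimal, illustrative sketch of that idea: a Q-function critic made "bigger" by stacking residual MLP blocks, with Layer Normalization standing in as the regularizer. The framework (PyTorch), the choice of LayerNorm, the residual structure, and all widths/depths are assumptions for illustration only, not the paper's exact architecture or hyperparameters.

```python
# Illustrative sketch only. The specific regularizer (LayerNorm), residual
# blocks, and sizes below are assumptions, not BRO's published architecture.
import torch
import torch.nn as nn


class RegularizedBlock(nn.Module):
    """Residual MLP block; LayerNorm keeps activations well-scaled as depth grows."""

    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim),
            nn.LayerNorm(dim),  # assumed regularizer enabling stable scaling
            nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.net(x)


class ScaledCritic(nn.Module):
    """Q(s, a) critic scaled up by stacking regularized blocks."""

    def __init__(self, obs_dim: int, act_dim: int, width: int = 1024, depth: int = 3):
        super().__init__()
        self.inp = nn.Sequential(
            nn.Linear(obs_dim + act_dim, width), nn.LayerNorm(width), nn.ReLU()
        )
        self.blocks = nn.Sequential(*[RegularizedBlock(width) for _ in range(depth)])
        self.out = nn.Linear(width, 1)

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        x = torch.cat([obs, act], dim=-1)
        return self.out(self.blocks(self.inp(x)))


# Hypothetical usage: a critic for a 24-dim observation, 6-dim action task.
critic = ScaledCritic(obs_dim=24, act_dim=6)
q_values = critic(torch.randn(32, 24), torch.randn(32, 6))  # shape (32, 1)
```

The sketch only covers the "Bigger, Regularized" half of the recipe; the optimistic-exploration component described in the abstract is a separate mechanism not shown here.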

Cite

Text

Nauman et al. "Bigger, Regularized, Optimistic: Scaling for Compute and Sample-Efficient Continuous Control." ICML 2024 Workshops: ARLET, 2024.

Markdown

[Nauman et al. "Bigger, Regularized, Optimistic: Scaling for Compute and Sample-Efficient Continuous Control." ICML 2024 Workshops: ARLET, 2024.](https://mlanthology.org/icmlw/2024/nauman2024icmlw-bigger/)

BibTeX

@inproceedings{nauman2024icmlw-bigger,
  title     = {{Bigger, Regularized, Optimistic: Scaling for Compute and Sample-Efficient Continuous Control}},
  author    = {Nauman, Michal and Ostaszewski, Mateusz and Jankowski, Krzysztof and Miłoś, Piotr and Cygan, Marek},
  booktitle = {ICML 2024 Workshops: ARLET},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/nauman2024icmlw-bigger/}
}