Accelerating Neural Network Training: An Analysis of the AlgoPerf Competition

Abstract

The goal of the AlgoPerf: Training Algorithms competition is to evaluate practical speed-ups in neural network training achieved solely by improving the underlying training algorithms. In the external tuning ruleset, submissions must provide workload-agnostic hyperparameter search spaces, while in the self-tuning ruleset they must be completely hyperparameter-free. In both rulesets, submissions are compared on time-to-result across multiple deep learning workloads, training on fixed hardware. This paper presents the results of the inaugural AlgoPerf competition, which drew 18 diverse submissions from 10 teams. Our investigation reveals several key findings: (1) The winning submission in the external tuning ruleset, using Distributed Shampoo, demonstrates the effectiveness of non-diagonal preconditioning over popular methods like Adam, even when compared on wall-clock runtime. (2) The winning submission in the self-tuning ruleset, based on the Schedule-Free AdamW algorithm, demonstrates a new level of effectiveness for completely hyperparameter-free training algorithms. (3) The top-scoring submissions were surprisingly robust to workload changes. We also discuss the engineering challenges encountered in ensuring a fair comparison between different training algorithms. These results highlight both the significant progress so far and the considerable room for further improvement.
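To make finding (1) concrete, the sketch below contrasts Adam's diagonal preconditioning with the non-diagonal, Kronecker-factored preconditioning used by Shampoo-style methods. It is a minimal NumPy illustration of the textbook Shampoo update for a single 2-D weight matrix, not the Distributed Shampoo implementation from the winning submission; the function names, hyperparameters, and single-matrix setup are assumptions made for clarity.

```python
import numpy as np

def inverse_pth_root(mat, p, eps=1e-6):
    """Compute mat^{-1/p} for a symmetric PSD matrix via eigendecomposition."""
    eigvals, eigvecs = np.linalg.eigh(mat)
    inv_root = np.power(np.maximum(eigvals, 0.0) + eps, -1.0 / p)
    return (eigvecs * inv_root) @ eigvecs.T

def shampoo_step(weights, grad, L, R, lr=1e-3):
    """One Shampoo-style step for a 2-D weight matrix (illustrative sketch).

    Unlike Adam, which rescales each coordinate independently (a diagonal
    preconditioner built from squared gradients), Shampoo accumulates full
    left/right gradient statistics and preconditions the whole matrix:
        W <- W - lr * L^{-1/4} @ G @ R^{-1/4}
    """
    L = L + grad @ grad.T   # left statistic, shape (m, m)
    R = R + grad.T @ grad   # right statistic, shape (n, n)
    update = inverse_pth_root(L, 4) @ grad @ inverse_pth_root(R, 4)
    return weights - lr * update, L, R

# Toy usage: a 4x3 weight matrix and a random gradient.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
G = rng.normal(size=(4, 3))
L0, R0 = np.zeros((4, 4)), np.zeros((3, 3))
W, L0, R0 = shampoo_step(W, G, L0, R0)
```

The non-diagonal statistics L and R capture correlations between parameters within a layer, which diagonal methods like Adam ignore; this is the structural difference the competition results speak to, at the cost of maintaining and inverting per-layer matrices.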

Cite

Text

Kasimbeg et al. "Accelerating Neural Network Training: An Analysis of the AlgoPerf Competition." International Conference on Learning Representations, 2025.

Markdown

[Kasimbeg et al. "Accelerating Neural Network Training: An Analysis of the AlgoPerf Competition." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/kasimbeg2025iclr-accelerating/)

BibTeX

@inproceedings{kasimbeg2025iclr-accelerating,
  title     = {{Accelerating Neural Network Training: An Analysis of the AlgoPerf Competition}},
  author    = {Kasimbeg, Priya and Schneider, Frank and Eschenhagen, Runa and Bae, Juhan and Sastry, Chandramouli Shama and Saroufim, Mark and Feng, Boyuan and Wright, Less and Yang, Edward Z. and Nado, Zachary and Medapati, Sourabh and Hennig, Philipp and Rabbat, Michael and Dahl, George E.},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/kasimbeg2025iclr-accelerating/}
}