Accelerating Optimization via Differentiable Stopping Time

Abstract

A common approach for accelerating optimization algorithms is to minimize the loss achieved within a fixed time budget, which yields a framework that is differentiable with respect to the algorithm's hyperparameters. In contrast, the complementary objective of minimizing the time needed to reach a target loss is traditionally considered non-differentiable. To address this limitation, we propose a differentiable discrete stopping time and theoretically justify it through its connection to continuous-time differential equations. We design an efficient algorithm to compute its sensitivities, thereby enabling a new differentiable formulation for directly accelerating algorithms. We demonstrate its effectiveness in applications such as online hyperparameter tuning and learning to optimize, and our proposed methods show superior performance in comprehensive experiments across a variety of problems.
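To make the stated idea concrete, the following is a minimal sketch, not the paper's actual construction, of one common way to relax a discrete stopping time into a differentiable surrogate: the hard count N(η) = min{k : L_k ≤ L_target} is replaced by a soft count T_soft(η) = Σ_k σ((L_k(η) − L_target)/τ), and its sensitivity with respect to a hyperparameter (here an assumed step size η on a toy quadratic) is obtained by automatic differentiation through the unrolled optimizer trajectory. All names, the objective, and the temperature τ below are illustrative assumptions.

```python
# Minimal sketch (assumption, not the authors' method): a sigmoid relaxation of
# the discrete time-to-target, differentiated through an unrolled GD trajectory.
import jax
import jax.numpy as jnp

def loss(x):
    # Toy quadratic objective; stands in for the training loss.
    return 0.5 * jnp.sum(x ** 2)

def soft_stopping_time(log_eta, x0, l_target=1e-3, tau=1e-3, max_steps=200):
    eta = jnp.exp(log_eta)            # step size: the hyperparameter being tuned
    x = x0
    t_soft = 0.0
    for _ in range(max_steps):        # unrolled gradient-descent trajectory
        x = x - eta * jax.grad(loss)(x)
        # Each step contributes ~1 while the loss is still above the target
        # and ~0 once it falls below, giving a smooth count of "steps needed".
        t_soft = t_soft + jax.nn.sigmoid((loss(x) - l_target) / tau)
    return t_soft

x0 = jnp.ones(10)
g = jax.grad(soft_stopping_time)(jnp.log(0.1), x0)
print(g)  # sensitivity of the surrogate stopping time w.r.t. log step size
```

One could then descend on this gradient to shrink the surrogate time-to-target; the paper's contribution is a principled discrete stopping time and an efficient sensitivity computation rather than this naive relaxation.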

Cite

Text

Xie et al. "Accelerating Optimization via Differentiable Stopping Time." Advances in Neural Information Processing Systems, 2025.

Markdown

[Xie et al. "Accelerating Optimization via Differentiable Stopping Time." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/xie2025neurips-accelerating/)

BibTeX

@inproceedings{xie2025neurips-accelerating,
  title     = {{Accelerating Optimization via Differentiable Stopping Time}},
  author    = {Xie, Zhonglin and Fong, Yiman and Yuan, Haoran and Wen, Zaiwen},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/xie2025neurips-accelerating/}
}