Relationship Between Batch Size and Number of Steps Needed for Nonconvex Optimization of Stochastic Gradient Descent Using Armijo-Line-Search Learning Rate

Abstract

While stochastic gradient descent (SGD) can use various learning rates, such as constant or diminishing rates, previous numerical results showed that SGD performs better than other deep-learning optimizers when it uses learning rates given by line search methods. In this paper, we perform a convergence analysis of SGD with a learning rate given by an Armijo line search for nonconvex optimization, indicating that the upper bound of the expectation of the squared norm of the full gradient becomes small when the number of steps and the batch size are large. Next, we show that, for SGD with the Armijo-line-search learning rate, the number of steps needed for nonconvex optimization is a monotone decreasing convex function of the batch size; that is, the number of steps needed for nonconvex optimization decreases as the batch size increases. Furthermore, we show that the stochastic first-order oracle (SFO) complexity, which is the stochastic gradient computation cost, is a convex function of the batch size; that is, there exists a critical batch size that minimizes the SFO complexity. Finally, we provide numerical results that support our theoretical results.
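
The analyzed method is SGD whose step size is chosen at each iteration by an Armijo line search on the sampled mini-batch. Below is a minimal sketch of that procedure; the function names (sgd_armijo, loss_fn, grad_fn) and the backtracking constants (lr_max, c, beta) are illustrative assumptions following common line-search practice, not details taken from the paper.

import numpy as np

def sgd_armijo(loss_fn, grad_fn, w0, data, batch_size=64, steps=1000,
               lr_max=1.0, c=0.1, beta=0.5, rng=None):
    """Minimal sketch of SGD with a backtracking Armijo line search.

    Assumed (hypothetical) helper signatures:
      loss_fn(w, batch) -> scalar mini-batch loss
      grad_fn(w, batch) -> mini-batch gradient, same shape as the flat vector w
    `data` is a NumPy array whose first axis indexes training examples.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = w0.copy()
    for _ in range(steps):
        # Sample a mini-batch of size b without replacement.
        batch = data[rng.choice(len(data), size=batch_size, replace=False)]
        g = grad_fn(w, batch)
        f = loss_fn(w, batch)
        lr = lr_max  # restart the search from the maximum step size each iteration
        # Backtrack until the Armijo (sufficient-decrease) condition holds on the
        # same mini-batch: loss(w - lr*g) <= loss(w) - c * lr * ||g||^2.
        while loss_fn(w - lr * g, batch) > f - c * lr * np.dot(g, g):
            lr *= beta
            if lr < 1e-10:  # safeguard against endless backtracking
                break
        w = w - lr * g
    return w

In the abstract's terms, if K(b) denotes the number of steps needed at batch size b, each step computes b stochastic gradients, so the SFO complexity is b * K(b); because K(b) decreases in b while b itself grows, the convexity result implies this cost is minimized at a critical batch size.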

Cite

Text

Tsukada and Iiduka. "Relationship Between Batch Size and Number of Steps Needed for Nonconvex Optimization of Stochastic Gradient Descent Using Armijo-Line-Search Learning Rate." Transactions on Machine Learning Research, 2025.

Markdown

[Tsukada and Iiduka. "Relationship Between Batch Size and Number of Steps Needed for Nonconvex Optimization of Stochastic Gradient Descent Using Armijo-Line-Search Learning Rate." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/tsukada2025tmlr-relationship/)

BibTeX

@article{tsukada2025tmlr-relationship,
  title     = {{Relationship Between Batch Size and Number of Steps Needed for Nonconvex Optimization of Stochastic Gradient Descent Using Armijo-Line-Search Learning Rate}},
  author    = {Tsukada, Yuki and Iiduka, Hideaki},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/tsukada2025tmlr-relationship/}
}