Achieving $\mathcal{O}(\epsilon^{-1.5})$ Complexity in Hessian/Jacobian-Free Stochastic Bilevel Optimization

Abstract

In this paper, we revisit the bilevel optimization problem, in which the upper-level objective function is generally nonconvex and the lower-level objective function is strongly convex. Although this class of problems has been studied extensively, it remains an open question whether an $\mathcal{O}(\epsilon^{-1.5})$ sample complexity can be achieved in stochastic bilevel optimization without any second-order derivative computation. To fill this gap, we propose a novel Hessian/Jacobian-free bilevel optimizer named FdeHBO, which features a simple, fully single-loop structure, a projection-aided finite-difference approximation of Hessian/Jacobian-vector products, and momentum-based updates. Theoretically, we show that FdeHBO requires $\mathcal{O}(\epsilon^{-1.5})$ iterations (each using $\mathcal{O}(1)$ samples and only first-order gradient information) to find an $\epsilon$-accurate stationary point. To the best of our knowledge, this is the first Hessian/Jacobian-free method with an $\mathcal{O}(\epsilon^{-1.5})$ sample complexity for nonconvex-strongly-convex stochastic bilevel optimization.
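For context, the nonconvex-strongly-convex bilevel problem studied here takes the standard form $\min_x \Phi(x) := f(x, y^*(x))$ subject to $y^*(x) = \arg\min_y g(x, y)$, where $f$ is generally nonconvex in $x$ and $g$ is strongly convex in $y$. Evaluating the hypergradient $\nabla \Phi(x)$ involves Hessian/Jacobian-vector products such as $\nabla_{yy}^2 g(x, y)\,v$, which a Hessian/Jacobian-free method replaces with finite differences of first-order gradients. The minimal sketch below (illustrative only, not the authors' implementation; the function names and step size $\delta$ are assumptions) shows the central-difference approximation of a Hessian-vector product using gradient evaluations alone.

import numpy as np

def hvp_fd(grad_y, x, y, v, delta=1e-4):
    """Approximate the Hessian-vector product ∇²_yy g(x, y) v using a
    central finite difference of the first-order gradient ∇_y g:
        H v ≈ (∇_y g(x, y + δ v) - ∇_y g(x, y - δ v)) / (2 δ)."""
    return (grad_y(x, y + delta * v) - grad_y(x, y - delta * v)) / (2 * delta)

# Toy example: g(x, y) = 0.5 yᵀ A y - xᵀ y, so ∇_y g = A y - x and ∇²_yy g = A.
A = np.array([[3.0, 1.0], [1.0, 2.0]])  # symmetric positive definite, so g is strongly convex in y
grad_y = lambda x, y: A @ y - x
x = np.array([1.0, -1.0])
y = np.array([0.5, 0.25])
v = np.array([1.0, 2.0])

print(hvp_fd(grad_y, x, y, v))  # ≈ A @ v = [5., 5.]

Note that FdeHBO's projection-aided variant additionally constrains the vector being differenced to a bounded set so that the finite-difference error stays controlled; that refinement is omitted from this sketch.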

Cite

Text

Yang et al. "Achieving $\mathcal{O}(\epsilon^{-1.5})$ Complexity in Hessian/Jacobian-Free Stochastic Bilevel Optimization." Neural Information Processing Systems, 2023.

Markdown

[Yang et al. "Achieving $\mathcal{O}(\epsilon^{-1.5})$ Complexity in Hessian/Jacobian-Free Stochastic Bilevel Optimization." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/yang2023neurips-achieving/)

BibTeX

@inproceedings{yang2023neurips-achieving,
  title     = {{Achieving $\mathcal{O}(\epsilon^{-1.5})$ Complexity in Hessian/Jacobian-Free Stochastic Bilevel Optimization}},
  author    = {Yang, Yifan and Xiao, Peiyao and Ji, Kaiyi},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/yang2023neurips-achieving/}
}