On Penalty Methods for Nonconvex Bilevel Optimization and First-Order Stochastic Approximation
Abstract
In this work, we study first-order algorithms for solving Bilevel Optimization (BO) where the objective functions are smooth but possibly nonconvex in both levels and the variables are restricted to closed convex sets. As a first step, we study the landscape of BO through the lens of penalty methods, in which the upper- and lower-level objectives are combined in a weighted sum with penalty parameter $\sigma > 0$. In particular, we establish a strong connection between the penalty function and the hyper-objective by explicitly characterizing the conditions under which the values and derivatives of the two must be $O(\sigma)$-close. A by-product of our analysis is an explicit formula for the gradient of the hyper-objective when the lower-level problem has multiple solutions, derived under minimal conditions, which could be of independent interest. Next, viewing the penalty formulation as an $O(\sigma)$-approximation of the original BO, we propose first-order algorithms that find an $\epsilon$-stationary solution by optimizing the penalty formulation with $\sigma = O(\epsilon)$. When the perturbed lower-level problem uniformly satisfies the {\it small-error} proximal error-bound (EB) condition, we propose a first-order algorithm that converges to an $\epsilon$-stationary point of the penalty function using a total of $O(\epsilon^{-7})$ accesses to first-order stochastic gradient oracles. Under an additional assumption on the stochastic oracles, we show that the algorithm can be implemented in a fully {\it single-loop} manner, {\it i.e.,} with $O(1)$ samples per iteration, and achieves an improved oracle complexity of $O(\epsilon^{-5})$.
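To make the penalty viewpoint concrete, below is a minimal, self-contained Python sketch of the kind of first-order scheme the abstract describes, on a toy smooth bilevel instance of our own choosing; it is not the authors' implementation. One common way to write the weighted sum consistent with the $O(\sigma)$-closeness claim is $\psi_\sigma(x) = \sigma^{-1}\big(\min_y [\sigma f(x,y) + g(x,y)] - \min_y g(x,y)\big)$, whose gradient admits a Danskin-type expression evaluated at the two inner minimizers. The function names (`inner_argmin`, `penalty_hypergradient`), the toy objectives, and all step sizes are illustrative assumptions.

# Toy smooth bilevel instance (illustrative only):
#   upper level  f(x, y) = (x - 1)^2 + (y - 1)^2
#   lower level  g(x, y) = (y - x)^2, so y*(x) = x and the hyper-objective
#   is phi(x) = f(x, y*(x)) = 2 (x - 1)^2, minimized at x = 1.
def fx(x, y): return 2 * (x - 1)      # partial f / partial x
def fy(x, y): return 2 * (y - 1)      # partial f / partial y
def gx(x, y): return -2 * (y - x)     # partial g / partial x
def gy(x, y): return 2 * (y - x)      # partial g / partial y

def inner_argmin(x, grad_y, y0, steps=200, lr=0.1):
    # Approximate argmin over y of a smooth inner objective by gradient descent.
    y = y0
    for _ in range(steps):
        y -= lr * grad_y(x, y)
    return y

def penalty_hypergradient(x, sigma, y0):
    # Danskin-type gradient of the penalty function
    #   psi_sigma(x) = (1/sigma) * ( min_y [sigma*f + g]  -  min_y g ).
    y_sig = inner_argmin(x, lambda x, y: sigma * fy(x, y) + gy(x, y), y0)
    y_low = inner_argmin(x, gy, y0)
    return fx(x, y_sig) + (gx(x, y_sig) - gx(x, y_low)) / sigma

x, sigma = 5.0, 1e-2   # small sigma: psi_sigma is O(sigma)-close to phi
for _ in range(300):
    x -= 0.05 * penalty_hypergradient(x, sigma, y0=x)
print(x)               # approaches 1.0, the hyper-optimal point

On this instance the penalty gradient tends to the true hypergradient $\nabla\varphi(x) = 4(x-1)$ as $\sigma \to 0$, illustrating the $O(\sigma)$-closeness of values and derivatives that the abstract refers to; the paper's algorithms replace the exact inner solves with stochastic first-order updates.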
Cite
Text
Kwon et al. "On Penalty Methods for Nonconvex Bilevel Optimization and First-Order Stochastic Approximation." International Conference on Learning Representations, 2024.
Markdown
[Kwon et al. "On Penalty Methods for Nonconvex Bilevel Optimization and First-Order Stochastic Approximation." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/kwon2024iclr-penalty/)
BibTeX
@inproceedings{kwon2024iclr-penalty,
  title     = {{On Penalty Methods for Nonconvex Bilevel Optimization and First-Order Stochastic Approximation}},
  author    = {Kwon, Jeongyeol and Kwon, Dohyun and Wright, Stephen and Nowak, Robert D.},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/kwon2024iclr-penalty/}
}