BOME! Bilevel Optimization Made Easy: A Simple First-Order Approach
Abstract
Bilevel optimization (BO) is useful for solving a variety of important machine learning problems, including but not limited to hyperparameter optimization, meta-learning, continual learning, and reinforcement learning. Conventional BO methods need to differentiate through the low-level optimization process with implicit differentiation, which requires expensive calculations related to the Hessian matrix. There has been a recent quest for first-order methods for BO, but the methods proposed to date tend to be complicated and impractical for large-scale deep learning applications. In this work, we propose a simple first-order BO algorithm that depends only on first-order gradient information, requires no implicit differentiation, and is practical and efficient for large-scale non-convex functions in deep learning. We provide a non-asymptotic convergence analysis of the proposed method to stationary points for non-convex objectives and present empirical results that show its superior practical performance.
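To illustrate the general idea the abstract describes, the sketch below solves a toy bilevel problem using only first-order gradients and no implicit differentiation: the lower-level value function is approximated with a few plain gradient steps, and its gap from the current iterate is folded into the upper-level update as a penalty. This is a hedged illustration of the value-function approach, not the paper's exact BOME update (which uses an adaptive, dynamic-barrier weight rather than the fixed penalty `lam` assumed here); the toy objectives `F` and `f` and all step sizes are hypothetical choices made for this example.

```python
import numpy as np

# Toy bilevel problem (hypothetical, for illustration only):
#   upper level:  min_x F(x, y*(x))  with  F(x, y) = x^2 + (y - 1)^2
#   lower level:  y*(x) = argmin_y f(x, y)  with  f(x, y) = (y - x)^2,
# whose bilevel solution is x = y = 0.5.

def F(x, y):
    return x ** 2 + (y - 1.0) ** 2

def f(x, y):
    return (y - x) ** 2

def grad_F(x, y):
    return np.array([2.0 * x, 2.0 * (y - 1.0)])

def grad_f(x, y):
    return np.array([-2.0 * (y - x), 2.0 * (y - x)])

def solve(outer=2000, inner=50, alpha=0.004, eta=0.1, lam=100.0):
    x, y = 2.0, -1.0
    for _ in range(outer):
        # Approximate the lower-level value f*(x) = min_y f(x, y) with
        # a few plain gradient steps on y -- no implicit differentiation,
        # no Hessian-vector products.
        y_t = y
        for _ in range(inner):
            y_t -= eta * grad_f(x, y_t)[1]
        # q(x, y) = f(x, y) - f(x, y_t) >= 0 measures lower-level
        # suboptimality; its gradient needs only first-order information
        # (y_t is held fixed when differentiating, Danskin-style).
        grad_q = grad_f(x, y) - np.array([grad_f(x, y_t)[0], 0.0])
        # Fixed-penalty surrogate of the constrained reformulation
        # min F(x, y) s.t. q(x, y) <= 0 (a simplification of the paper's rule).
        g = grad_F(x, y) + lam * grad_q
        x -= alpha * g[0]
        y -= alpha * g[1]
    return x, y
```

Running `solve()` drives the iterate close to the bilevel solution (0.5, 0.5); with a fixed penalty weight the limit is only approximately feasible, which is one motivation for the adaptive weighting used in the actual method.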
Cite
Text
Liu et al. "BOME! Bilevel Optimization Made Easy: A Simple First-Order Approach." Neural Information Processing Systems, 2022.
Markdown
[Liu et al. "BOME! Bilevel Optimization Made Easy: A Simple First-Order Approach." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/liu2022neurips-bome/)
BibTeX
@inproceedings{liu2022neurips-bome,
title = {{BOME! Bilevel Optimization Made Easy: A Simple First-Order Approach}},
author = {Liu, Bo and Ye, Mao and Wright, Stephen and Stone, Peter and Liu, Qiang},
booktitle = {Neural Information Processing Systems},
year = {2022},
url = {https://mlanthology.org/neurips/2022/liu2022neurips-bome/}
}