Nyström Method for Accurate and Scalable Implicit Differentiation
Abstract
The essential difficulty of gradient-based bilevel optimization using implicit differentiation is estimating the inverse Hessian-vector product with respect to neural network parameters. This paper proposes to tackle this problem with the Nyström method and the Woodbury matrix identity, exploiting the low-rankness of the Hessian. Compared to existing iterative approximations, such as conjugate gradient and the Neumann-series approximation, the proposed method avoids numerical instability and can be computed efficiently with matrix operations, without iterations. As a result, the proposed method works stably across various tasks and is faster than iterative approximations. Through experiments including large-scale hyperparameter optimization and meta-learning, we demonstrate that the Nyström method consistently achieves performance comparable or even superior to other approaches. The source code is available at https://github.com/moskomule/hypergrad.
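To make the idea concrete, below is a minimal sketch (not the authors' hypergrad implementation) of an inverse Hessian-vector product computed via a rank-k Nyström approximation of the Hessian combined with the Woodbury matrix identity. The function name `nystrom_inverse_hvp` and the parameters `rank` and `rho` (a damping term) are illustrative assumptions, not the paper's API; the only ingredients taken from the abstract are the Nyström approximation, the Woodbury identity, and the absence of iterative solves.

```python
# Illustrative sketch: Nyström + Woodbury inverse Hessian-vector product.
# Assumes a damped Hessian (H + rho * I); names are hypothetical, not the paper's API.
import torch


def nystrom_inverse_hvp(hvp, v, n, rank=10, rho=1e-2, generator=None):
    """Approximate (H + rho * I)^{-1} v with a rank-`rank` Nyström
    approximation of H and the Woodbury matrix identity.

    hvp: function computing H @ x for a length-n vector x
    v:   right-hand-side vector of length n
    """
    # Sample `rank` coordinates and build C = H[:, idx] column by column
    # via Hessian-vector products with standard basis vectors.
    idx = torch.randperm(n, generator=generator)[:rank]
    cols = []
    for i in idx:
        e = torch.zeros(n)
        e[i] = 1.0
        cols.append(hvp(e))
    C = torch.stack(cols, dim=1)      # (n, rank)
    W = C[idx, :]                     # (rank, rank) principal submatrix

    # Nyström approximation: H ~ C W^{-1} C^T.  By the Woodbury identity,
    # (rho I + C W^{-1} C^T)^{-1} v = (v - C (rho W + C^T C)^{-1} C^T v) / rho,
    # so only a (rank x rank) linear system is solved -- no iterations.
    small = rho * W + C.T @ C
    return (v - C @ torch.linalg.solve(small, C.T @ v)) / rho


if __name__ == "__main__":
    # Toy check against a dense solve on a synthetic low-rank-plus-noise Hessian.
    torch.manual_seed(0)
    n, true_rank = 200, 10
    U = torch.randn(n, true_rank)
    H = U @ U.T + 1e-3 * torch.eye(n)   # nearly low-rank, symmetric PSD
    v = torch.randn(n)
    approx = nystrom_inverse_hvp(lambda x: H @ x, v, n, rank=20, rho=1e-2)
    exact = torch.linalg.solve(H + 1e-2 * torch.eye(n), v)
    print(torch.norm(approx - exact) / torch.norm(exact))
```

In practice, the Hessian-vector products for the sampled columns would come from automatic differentiation (double backward) rather than an explicit matrix, and the cost is dominated by the `rank` HVPs plus one small dense solve.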
Cite
Text
Hataya and Yamada. "Nyström Method for Accurate and Scalable Implicit Differentiation." Artificial Intelligence and Statistics, 2023.

Markdown

[Hataya and Yamada. "Nyström Method for Accurate and Scalable Implicit Differentiation." Artificial Intelligence and Statistics, 2023.](https://mlanthology.org/aistats/2023/hataya2023aistats-nystrom/)

BibTeX
@inproceedings{hataya2023aistats-nystrom,
title = {{Nyström Method for Accurate and Scalable Implicit Differentiation}},
author = {Hataya, Ryuichiro and Yamada, Makoto},
booktitle = {Artificial Intelligence and Statistics},
year = {2023},
pages = {4643--4654},
volume = {206},
url = {https://mlanthology.org/aistats/2023/hataya2023aistats-nystrom/}
}