Optimization-Derived Learning with Essential Convergence Analysis of Training and Hyper-Training
Abstract
Optimization-Derived Learning (ODL), which designs learning models from the perspective of optimization, has recently attracted attention in the learning and vision communities. However, previous ODL approaches treat training and hyper-training as two separate stages: the hyper-training variables must be held fixed during training, so the convergence of the training and hyper-training variables cannot be established simultaneously. In this work, we design a Generalized Krasnoselskii-Mann (GKM) scheme based on fixed-point iterations as our fundamental ODL module, which unifies existing ODL methods as special cases. Building on the GKM scheme, we construct a Bilevel Meta Optimization (BMO) algorithmic framework that solves for the optimal training and hyper-training variables together. We rigorously prove the essential joint convergence of the fixed-point iteration for training and the hyper-parameter optimization process for hyper-training, in terms of both approximation quality and stationarity analysis. Experiments demonstrate the efficiency of BMO, with competitive performance on sparse coding and real-world applications such as image deconvolution and rain streak removal.
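To make the fixed-point view underlying the GKM scheme concrete, the sketch below shows a plain (non-generalized) Krasnoselskii-Mann iteration applied to the forward-backward operator of a LASSO-type sparse coding problem, one of the tasks in the experiments. This is a minimal illustrative assumption, not the authors' implementation: the function names, the toy problem sizes, and the fixed choice of the regularization weight `lam` are all hypothetical. Note that `lam` plays the role of a hyper-training variable and is held fixed here, which is exactly the two-stage limitation that BMO is designed to remove.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista_operator(x, A, b, lam, gamma):
    """Forward-backward operator T for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    T is averaged (hence nonexpansive) for gamma in (0, 2/L],
    where L is the largest eigenvalue of A^T A."""
    return soft_threshold(x - gamma * A.T @ (A @ x - b), gamma * lam)

def km_iteration(T, x0, alpha=0.5, max_iter=500, tol=1e-8):
    """Krasnoselskii-Mann iteration x_{k+1} = (1 - alpha)*x_k + alpha*T(x_k).
    Converges to a fixed point of T when T is nonexpansive and alpha in (0, 1)."""
    x = x0
    for _ in range(max_iter):
        x_next = (1.0 - alpha) * x + alpha * T(x)
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Toy sparse-coding instance (hypothetical sizes).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[rng.choice(50, 5, replace=False)] = rng.standard_normal(5)
b = A @ x_true
lam = 0.1                                  # hyper-training variable, fixed in this sketch
gamma = 1.0 / np.linalg.norm(A, 2) ** 2    # step size 1/L, within the safe range
x_star = km_iteration(lambda x: ista_operator(x, A, b, lam, gamma), np.zeros(50))
```

In the paper's setting, the operator T itself carries learnable (training) parameters and the scheme is generalized beyond this vanilla averaging step; BMO then optimizes variables like `lam` jointly with the fixed-point iteration rather than in a separate outer stage.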
Cite

Liu et al. "Optimization-Derived Learning with Essential Convergence Analysis of Training and Hyper-Training." International Conference on Machine Learning, 2022.

BibTeX
@inproceedings{liu2022icml-optimizationderived,
title = {{Optimization-Derived Learning with Essential Convergence Analysis of Training and Hyper-Training}},
author = {Liu, Risheng and Liu, Xuan and Zeng, Shangzhi and Zhang, Jin and Zhang, Yixuan},
booktitle = {International Conference on Machine Learning},
year = {2022},
pages = {13825--13856},
volume = {162},
url = {https://mlanthology.org/icml/2022/liu2022icml-optimizationderived/}
}