From Optimization Dynamics to Generalization Bounds via Łojasiewicz Gradient Inequality
Abstract
Optimization and generalization are two essential aspects of statistical machine learning. In this paper, we propose a framework that connects optimization with generalization by analyzing the generalization error based on the optimization trajectory under the gradient flow algorithm. The key ingredient of this framework is the Uniform-LGI, a property that is generally satisfied when training machine learning models. Leveraging the Uniform-LGI, we first derive convergence rates for the gradient flow algorithm, and then give generalization bounds for a large class of machine learning models. We further apply our framework to three distinct machine learning models: linear regression, kernel regression, and two-layer neural networks. Through our approach, we obtain generalization estimates that match or extend previous results.
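For orientation, the two objects named in the abstract can be written in a standard form (a sketch only; the paper's precise definition of the Uniform-LGI and its exponent conventions may differ). The gradient flow algorithm evolves the parameters $\theta(t)$ of a loss $L$ by
\[
\frac{d\theta(t)}{dt} = -\nabla L(\theta(t)),
\]
and a Łojasiewicz gradient inequality states that, for some constant $c > 0$ and exponent $\alpha$ (classically $\alpha \in [1/2, 1)$),
\[
\|\nabla L(\theta)\| \;\ge\; c\,\bigl(L(\theta) - L^{*}\bigr)^{\alpha},
\]
where $L^{*}$ is the infimum of the loss; with $\alpha = 1/2$ this reduces to the Polyak-Łojasiewicz condition. A "uniform" version presumably requires the same constants to hold along the entire optimization trajectory, which is what enables trajectory-based convergence rates and generalization estimates.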
Cite
Text
Liu et al. "From Optimization Dynamics to Generalization Bounds via Łojasiewicz Gradient Inequality." Transactions on Machine Learning Research, 2022.

Markdown
[Liu et al. "From Optimization Dynamics to Generalization Bounds via Łojasiewicz Gradient Inequality." Transactions on Machine Learning Research, 2022.](https://mlanthology.org/tmlr/2022/liu2022tmlr-optimization/)

BibTeX
@article{liu2022tmlr-optimization,
  title   = {{From Optimization Dynamics to Generalization Bounds via Łojasiewicz Gradient Inequality}},
  author  = {Liu, Fusheng and Yang, Haizhao and Hayou, Soufiane and Li, Qianxiao},
  journal = {Transactions on Machine Learning Research},
  year    = {2022},
  url     = {https://mlanthology.org/tmlr/2022/liu2022tmlr-optimization/}
}