Can Implicit Bias Explain Generalization? Stochastic Convex Optimization as a Case Study
Abstract
The notion of implicit bias, or implicit regularization, has been suggested as a means to explain the surprising generalization ability of modern-day overparameterized learning algorithms. This notion refers to the tendency of the optimization algorithm toward a certain structured solution that often generalizes well. Recently, several papers have studied implicit regularization and were able to identify this phenomenon in various scenarios.
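As a concrete illustration of the kind of implicit bias the abstract refers to (this example is not taken from the paper; it is a standard textbook instance, sketched here under the assumption of a synthetic underdetermined least-squares problem): gradient descent initialized at zero on an overparameterized linear regression converges to the minimum-Euclidean-norm solution among all interpolators.

```python
# Minimal sketch (not the paper's construction): implicit bias of gradient
# descent on underdetermined least squares toward the minimum-norm solution.
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                       # overparameterized: more features than samples
A = rng.standard_normal((n, d))
y = rng.standard_normal(n)

w = np.zeros(d)                      # zero initialization keeps iterates in the row space of A
step = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(10_000):
    w -= step * A.T @ (A @ w - y)    # gradient of 0.5 * ||A w - y||^2

w_min_norm = np.linalg.pinv(A) @ y   # minimum-norm interpolating solution

print("training residual:", np.linalg.norm(A @ w - y))
print("distance to min-norm solution:", np.linalg.norm(w - w_min_norm))
```

Both printed quantities should be close to zero: gradient descent fits the data exactly, and among the infinitely many interpolating solutions it selects the structured (minimum-norm) one.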
Cite
Text
Dauber et al. "Can Implicit Bias Explain Generalization? Stochastic Convex Optimization as a Case Study." Neural Information Processing Systems, 2020.
Markdown
[Dauber et al. "Can Implicit Bias Explain Generalization? Stochastic Convex Optimization as a Case Study." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/dauber2020neurips-implicit/)
BibTeX
@inproceedings{dauber2020neurips-implicit,
  title = {{Can Implicit Bias Explain Generalization? Stochastic Convex Optimization as a Case Study}},
  author = {Dauber, Assaf and Feder, Meir and Koren, Tomer and Livni, Roi},
  booktitle = {Neural Information Processing Systems},
  year = {2020},
  url = {https://mlanthology.org/neurips/2020/dauber2020neurips-implicit/}
}