Implicit Regularization via Neural Feature Alignment
Abstract
We approach the problem of implicit regularization in deep learning from a geometrical viewpoint. We highlight a regularization effect induced by a dynamical alignment of the neural tangent features introduced by Jacot et al. (2018), along a small number of task-relevant directions. This can be interpreted as a combined mechanism of feature selection and compression. By extrapolating a new analysis of Rademacher complexity bounds for linear models, we motivate and study a heuristic complexity measure that captures this phenomenon, in terms of sequences of tangent kernel classes along optimization paths. The code for our experiments is available at https://github.com/tfjgeorge/ntk_alignment.
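To make the central quantity concrete, here is a minimal sketch (not the authors' released code; see the repository above for that) of the alignment between the neural tangent kernel and the task labels, i.e. the centered kernel alignment between K_ij = ⟨∇θ f(x_i), ∇θ f(x_j)⟩ and the label Gram matrix y yᵀ. The function names, the toy two-layer model, and the synthetic labels are illustrative assumptions, written in PyTorch.

import torch

def tangent_kernel(model, X):
    # K[i, j] = <J_i, J_j>, where J_i is the Jacobian of the scalar
    # output f(x_i) with respect to all trainable parameters.
    params = [p for p in model.parameters() if p.requires_grad]
    jacobians = []
    for x in X:
        out = model(x.unsqueeze(0)).squeeze()
        grads = torch.autograd.grad(out, params)
        jacobians.append(torch.cat([g.reshape(-1) for g in grads]))
    J = torch.stack(jacobians)   # shape (n, num_params)
    return J @ J.T               # (n, n) tangent kernel matrix

def centered_alignment(K, y):
    # Centered kernel alignment between K and the label Gram matrix y y^T.
    n = K.shape[0]
    H = torch.eye(n) - torch.ones(n, n) / n   # centering matrix
    Kc = H @ K @ H
    Yc = H @ torch.outer(y, y) @ H
    return (Kc * Yc).sum() / (Kc.norm() * Yc.norm())

# Toy usage: measure alignment of the tangent kernel with binary labels.
model = torch.nn.Sequential(torch.nn.Linear(10, 64), torch.nn.Tanh(),
                            torch.nn.Linear(64, 1))
X = torch.randn(32, 10)
y = torch.sign(X[:, 0])          # synthetic +/-1 labels (illustrative)
K = tangent_kernel(model, X)
print(centered_alignment(K, y))

Tracking this value along an optimization path (recomputing K after each training epoch) is the kind of measurement the paper's heuristic complexity measure builds on; an increase over training indicates the tangent features aligning with task-relevant directions.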
Cite
Text
Baratin et al. "Implicit Regularization via Neural Feature Alignment." Artificial Intelligence and Statistics, 2021.
Markdown
[Baratin et al. "Implicit Regularization via Neural Feature Alignment." Artificial Intelligence and Statistics, 2021.](https://mlanthology.org/aistats/2021/baratin2021aistats-implicit/)
BibTeX
@inproceedings{baratin2021aistats-implicit,
title = {{Implicit Regularization via Neural Feature Alignment}},
author = {Baratin, Aristide and George, Thomas and Laurent, César and Hjelm, R Devon and Lajoie, Guillaume and Vincent, Pascal and Lacoste-Julien, Simon},
booktitle = {Artificial Intelligence and Statistics},
year = {2021},
pages = {2269--2277},
volume = {130},
url = {https://mlanthology.org/aistats/2021/baratin2021aistats-implicit/}
}