Implicit Regularization via Neural Feature Alignment
Abstract
We approach the problem of implicit regularization in deep learning from a geometrical viewpoint. We highlight a regularization effect induced by a dynamical alignment of the neural tangent features introduced by Jacot et al. (2018), along a small number of task-relevant directions. This can be interpreted as a combined feature selection and compression mechanism. By extrapolating a new analysis of Rademacher complexity bounds for linear models, we propose and study a new heuristic measure of complexity that captures this phenomenon, in terms of sequences of tangent kernel classes along the learning trajectories.
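As a concrete illustration of the quantity the abstract refers to, the sketch below (not the authors' code) computes the empirical tangent kernel of a small network, i.e. the Gram matrix of per-example parameter gradients, and its centered alignment with the label kernel yy^T, a standard kernel alignment measure of the kind used to track neural feature alignment. All names (init_mlp, tangent_kernel, centered_alignment) and the toy data are illustrative assumptions.

import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    # Initialize a small MLP; `sizes` lists the layer widths.
    params = []
    for din, dout in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (din, dout)) / jnp.sqrt(din),
                       jnp.zeros(dout)))
    return params

def mlp(params, x):
    for w, b in params[:-1]:
        x = jnp.tanh(x @ w + b)
    w, b = params[-1]
    return (x @ w + b).squeeze(-1)  # scalar output per example

def tangent_kernel(params, xs):
    # Empirical NTK: K[i, j] = <grad_theta f(x_i), grad_theta f(x_j)>.
    jac = jax.jacobian(lambda p: mlp(p, xs))(params)
    leaves = [j.reshape(xs.shape[0], -1) for j in jax.tree_util.tree_leaves(jac)]
    feats = jnp.concatenate(leaves, axis=1)  # tangent features, shape (n, P)
    return feats @ feats.T

def centered_alignment(k, y):
    # Centered kernel alignment <Kc, (yy^T)c>_F / (||Kc||_F ||(yy^T)c||_F).
    n = k.shape[0]
    h = jnp.eye(n) - jnp.ones((n, n)) / n  # centering matrix
    kc = h @ k @ h
    yyc = h @ jnp.outer(y, y) @ h
    return jnp.sum(kc * yyc) / (jnp.linalg.norm(kc) * jnp.linalg.norm(yyc))

key = jax.random.PRNGKey(0)
xs = jax.random.normal(key, (32, 5))
y = jnp.sign(xs[:, 0])  # toy binary labels
params = init_mlp(key, [5, 64, 64, 1])
print(centered_alignment(tangent_kernel(params, xs), y))

Tracking this alignment at successive checkpoints during training (rather than only at initialization, as above) is what would reveal the dynamical alignment effect described in the abstract.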
Cite
Text
Baratin et al. "Implicit Regularization via Neural Feature Alignment." NeurIPS 2020 Workshops: DL-IG, 2020.
Markdown
[Baratin et al. "Implicit Regularization via Neural Feature Alignment." NeurIPS 2020 Workshops: DL-IG, 2020.](https://mlanthology.org/neuripsw/2020/baratin2020neuripsw-implicit/)
BibTeX
@inproceedings{baratin2020neuripsw-implicit,
  title = {{Implicit Regularization via Neural Feature Alignment}},
  author = {Baratin, Aristide and George, Thomas and Laurent, César and Hjelm, R Devon and Lajoie, Guillaume and Vincent, Pascal and Lacoste-Julien, Simon},
  booktitle = {NeurIPS 2020 Workshops: DL-IG},
  year = {2020},
  url = {https://mlanthology.org/neuripsw/2020/baratin2020neuripsw-implicit/}
}