Training Shapes the Curvature of Shallow Neural Network Representations
Abstract
We study how training shapes the Riemannian geometry induced by neural network feature maps. At infinite width, shallow neural networks induce highly symmetric metrics on input space. Feature learning in networks trained to perform simple classification tasks magnifies local areas and reduces curvature along decision boundaries. These changes are consistent with previously proposed geometric approaches for hand-tuning kernel methods to improve generalization.
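The induced geometry the abstract refers to can be made concrete with a small sketch: a feature map φ pulls back the Euclidean metric on feature space to a Riemannian metric g(x) = J(x)ᵀJ(x) on input space, where J is the Jacobian of φ, and √det g measures local area magnification. The shallow network below (tanh nonlinearity, random weights, input dimension 2) is purely illustrative and not the authors' setup or code.

```python
import numpy as np

# Hypothetical shallow feature map phi(x) = tanh(W x + b), R^2 -> R^50.
# Weights are random for illustration only.
rng = np.random.default_rng(0)
d_in, width = 2, 50
W = rng.standard_normal((width, d_in)) / np.sqrt(d_in)
b = rng.standard_normal(width)

def feature_map(x):
    return np.tanh(W @ x + b)

def jacobian(x):
    # d/dx tanh(W x + b) = diag(1 - tanh^2(W x + b)) @ W
    pre = W @ x + b
    return (1.0 - np.tanh(pre) ** 2)[:, None] * W

def pullback_metric(x):
    # Metric on input space induced by the feature map: g = J^T J
    J = jacobian(x)
    return J.T @ J

x = np.array([0.5, -1.0])
g = pullback_metric(x)
# Local area magnification of the feature map at x
area_element = np.sqrt(np.linalg.det(g))
```

Tracking how g (and quantities derived from it, such as curvature) changes between initialization and the end of training is the kind of comparison the paper's analysis rests on.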
Cite
Text
Zavatone-Veth et al. "Training Shapes the Curvature of Shallow Neural Network Representations." NeurIPS 2022 Workshops: NeurReps, 2022.
BibTeX
@inproceedings{zavatoneveth2022neuripsw-training,
  title = {{Training Shapes the Curvature of Shallow Neural Network Representations}},
  author = {Zavatone-Veth, Jacob A and Rubinfien, Julian Alex and Pehlevan, Cengiz},
  booktitle = {NeurIPS 2022 Workshops: NeurReps},
  year = {2022},
  url = {https://mlanthology.org/neuripsw/2022/zavatoneveth2022neuripsw-training/}
}