Sparsifying Networks by Traversing Geodesics
Abstract
The geometry of the weight spaces and functional manifolds of neural networks plays an important role in understanding the intricacies of ML. In this paper, we view several open questions in ML through the lens of geometry, ultimately relating them to the discovery of points or paths of equivalent function in these spaces. We propose a mathematical framework for evaluating geodesics in functional space in order to find high-performance paths from a dense network to its sparser counterpart. Our results are obtained with VGG-11 trained on CIFAR-10 and MLPs trained on MNIST. Broadly, we demonstrate that the geodesic framework is general and can be applied to a wide variety of problems, ranging from sparsification to alleviating catastrophic forgetting.
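To make the setting concrete, the following is a minimal PyTorch sketch, entirely our own illustration and not the paper's method: it magnitude-prunes a small MLP and evaluates the loss along the naive straight-line path in weight space between the dense and sparse networks, i.e., the baseline path that a geodesic in functional space is meant to improve upon. All names here (make_mlp, prune_by_magnitude, loss_along_path) are hypothetical.

# Hypothetical sketch (not the authors' code): evaluate the loss along a
# straight-line path in weight space from a dense MLP to a pruned copy.
# The paper proposes following geodesics in functional space instead; this
# naive linear path only illustrates what "a path from a dense network to
# its sparser counterpart" means.
import torch
import torch.nn as nn

def make_mlp():
    return nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

def prune_by_magnitude(model, sparsity=0.9):
    # Zero out the smallest-magnitude entries of each weight matrix
    # (simple one-shot magnitude pruning).
    pruned = make_mlp()
    pruned.load_state_dict(model.state_dict())
    for p in pruned.parameters():
        if p.dim() > 1:  # prune weight matrices, leave biases intact
            k = int(sparsity * p.numel())
            thresh = p.abs().flatten().kthvalue(k).values
            p.data[p.abs() <= thresh] = 0.0
    return pruned

@torch.no_grad()
def loss_along_path(dense, sparse, x, y, steps=11):
    # Interpolate theta(t) = (1 - t) * theta_dense + t * theta_sparse
    # and record the loss at each point along the path.
    criterion = nn.CrossEntropyLoss()
    probe = make_mlp()
    losses = []
    for t in torch.linspace(0.0, 1.0, steps):
        for p_probe, p_d, p_s in zip(probe.parameters(),
                                     dense.parameters(),
                                     sparse.parameters()):
            p_probe.copy_((1 - t) * p_d + t * p_s)
        losses.append(criterion(probe(x), y).item())
    return losses

dense = make_mlp()                  # stands in for a trained network
sparse = prune_by_magnitude(dense)  # its sparser counterpart
x, y = torch.randn(64, 784), torch.randint(0, 10, (64,))
print(loss_along_path(dense, sparse, x, y))

On a trained network, the loss typically spikes in the middle of such a linear path; the geodesic framework described in the abstract instead seeks paths along which performance stays high throughout.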
Cite
Text
Raghavan and Thomson. "Sparsifying Networks by Traversing Geodesics." NeurIPS 2020 Workshops: DL-IG, 2020.
Markdown
[Raghavan and Thomson. "Sparsifying Networks by Traversing Geodesics." NeurIPS 2020 Workshops: DL-IG, 2020.](https://mlanthology.org/neuripsw/2020/raghavan2020neuripsw-sparsifying/)
BibTeX
@inproceedings{raghavan2020neuripsw-sparsifying,
title = {{Sparsifying Networks by Traversing Geodesics}},
author = {Raghavan, Guruprasad and Thomson, Matt},
booktitle = {NeurIPS 2020 Workshops: DL-IG},
year = {2020},
url = {https://mlanthology.org/neuripsw/2020/raghavan2020neuripsw-sparsifying/}
}