Fisher-Rao Metric, Geometry, and Complexity of Neural Networks

Abstract

We study the relationship between geometry and capacity measures for deep neural networks from an invariance viewpoint. We introduce a new notion of capacity — the Fisher-Rao norm — that possesses desirable invariance properties and is motivated by Information Geometry. We discover an analytical characterization of the new capacity measure, through which we establish norm-comparison inequalities and further show that the new measure serves as an umbrella for several existing norm-based complexity measures. We discuss upper bounds on the generalization error induced by the proposed measure. Extensive numerical experiments on CIFAR-10 support our theoretical findings. Our theoretical analysis rests on a key structural lemma about partial derivatives of multi-layer rectifier networks.
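The structural lemma mentioned above rests on the fact that a bias-free multilayer rectifier network with L hidden layers is positively homogeneous of degree L + 1 in its parameter vector theta, so Euler's identity gives <theta, grad_theta f> = (L + 1) f(x; theta); this identity is what connects the Fisher-Rao norm ||theta||_fr^2 = theta^T I(theta) theta (with I(theta) the Fisher information matrix) to its analytical characterization. Below is a minimal numerical sketch of that identity, not code from the paper; the layer sizes, random seed, and finite-difference step are illustrative assumptions.

# Numerical check of the homogeneity identity behind the Fisher-Rao norm:
# for a bias-free L-hidden-layer ReLU network, <theta, grad_theta f>
# equals (L + 1) * f(x; theta).
import numpy as np

rng = np.random.default_rng(0)
L = 3                                  # number of hidden layers (illustrative)
dims = [10, 16, 16, 16, 1]             # input -> L hidden layers -> scalar output

def forward(weights, x):
    """Bias-free ReLU network; returns the scalar output f(x; theta)."""
    h = x
    for W in weights[:-1]:
        h = np.maximum(W @ h, 0.0)     # ReLU is positively homogeneous
    return (weights[-1] @ h).item()

weights = [rng.standard_normal((dims[i + 1], dims[i])) for i in range(L + 1)]
x = rng.standard_normal(dims[0])

# <theta, grad_theta f> is the derivative d/dc f(x; c * theta) at c = 1,
# estimated here by a central finite difference in the scalar c.
eps = 1e-6
f_plus = forward([(1 + eps) * W for W in weights], x)
f_minus = forward([(1 - eps) * W for W in weights], x)
directional = (f_plus - f_minus) / (2 * eps)

f_val = forward(weights, x)
print(directional, (L + 1) * f_val)    # the two printed values should agree

Since f(x; c * theta) = c^(L + 1) f(x; theta) for c > 0 (each of the L + 1 weight matrices contributes one factor of c, and ReLU commutes with positive scaling), the finite difference recovers (L + 1) f to within discretization error.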

Cite

Text

Liang et al. "Fisher-Rao Metric, Geometry, and Complexity of Neural Networks." Artificial Intelligence and Statistics, 2019.

Markdown

[Liang et al. "Fisher-Rao Metric, Geometry, and Complexity of Neural Networks." Artificial Intelligence and Statistics, 2019.](https://mlanthology.org/aistats/2019/liang2019aistats-fisherrao/)

BibTeX

@inproceedings{liang2019aistats-fisherrao,
  title     = {{Fisher-Rao Metric, Geometry, and Complexity of Neural Networks}},
  author    = {Liang, Tengyuan and Poggio, Tomaso and Rakhlin, Alexander and Stokes, James},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2019},
  pages     = {888--896},
  volume    = {89},
  url       = {https://mlanthology.org/aistats/2019/liang2019aistats-fisherrao/}
}