Theoretical Characterisation of the Gauss-Newton Conditioning in Neural Networks

Abstract

The Gauss-Newton (GN) matrix plays an important role in machine learning, most evidently in its use as a preconditioner for a wide family of popular adaptive methods that speed up optimization. It can also provide key insights into the optimization landscape of neural networks. In the context of deep neural networks, understanding the GN matrix requires studying the interaction between the different weight matrices as well as the dependencies introduced by the data, which makes its analysis challenging. In this work, we take a first step towards theoretically characterizing the conditioning of the GN matrix in neural networks. We establish tight bounds on the condition number of the GN matrix in deep linear networks of arbitrary depth and width, which we also extend to two-layer ReLU networks. We further expand the analysis to additional architectural components, such as residual connections and convolutional layers. Finally, we empirically validate the bounds and uncover valuable insights into the influence of the analyzed architectural components.
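To make the object under study concrete, here is a minimal numerical sketch (not the paper's code) of the GN matrix and its conditioning for a deep linear network with squared loss. All widths, the depth, the sample count, and the rank tolerance below are illustrative assumptions.

# Sketch: Gauss-Newton matrix G = J^T J for a deep linear network
# f(x) = W_L ... W_1 x, where J stacks the Jacobians of the network
# output w.r.t. all weights over the samples.
import numpy as np

rng = np.random.default_rng(0)
dims = [4, 8, 8, 3]                    # input -> hidden -> hidden -> output (illustrative)
Ws = [rng.standard_normal((dims[i + 1], dims[i])) for i in range(len(dims) - 1)]
X = rng.standard_normal((16, dims[0])) # 16 samples (illustrative)

def sample_jacobian(x, Ws):
    """Jacobian of f(x) = W_L ... W_1 x w.r.t. all weights, flattened row-major.

    For layer l, f = B_l W_l a_l with a_l = W_{l-1} ... W_1 x and
    B_l = W_L ... W_{l+1}, so the block w.r.t. W_l is kron(B_l, a_l^T).
    """
    blocks = []
    for l in range(len(Ws)):
        a = x
        for W in Ws[:l]:               # input reaching layer l
            a = W @ a
        B = np.eye(dims[-1])
        for W in Ws[l + 1:]:           # map from layer-l output to f(x)
            B = W @ B
        blocks.append(np.kron(B, a[None, :]))
    return np.hstack(blocks)           # shape: (output_dim, num_params)

J = np.vstack([sample_jacobian(x, Ws) for x in X])
G = J.T @ J                            # Gauss-Newton matrix under squared loss

# G is rank-deficient whenever num_params > num_samples * output_dim, so the
# meaningful quantity is the condition number over the nonzero spectrum,
# computed here from the singular values of J (eigenvalues of G are s**2).
s = np.linalg.svd(J, compute_uv=False)
s_nz = s[s > s[0] * max(J.shape) * np.finfo(float).eps]
print("effective condition number of G:", (s_nz[0] / s_nz[-1]) ** 2)

Rerunning this sketch with different depths and widths gives a quick empirical feel for how the conditioning degrades or improves as the architecture changes, which is the behavior the paper's bounds characterize theoretically.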

Cite

Text

Zhao et al. "Theoretical Characterisation of the Gauss-Newton Conditioning in Neural Networks." Neural Information Processing Systems, 2024. doi:10.52202/079017-3650

Markdown

[Zhao et al. "Theoretical Characterisation of the Gauss-Newton Conditioning in Neural Networks." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/zhao2024neurips-theoretical/) doi:10.52202/079017-3650

BibTeX

@inproceedings{zhao2024neurips-theoretical,
  title     = {{Theoretical Characterisation of the Gauss-Newton Conditioning in Neural Networks}},
  author    = {Zhao, Jim and Singh, Sidak Pal and Lucchi, Aurelien},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-3650},
  url       = {https://mlanthology.org/neurips/2024/zhao2024neurips-theoretical/}
}