Path-SGD: Path-Normalized Optimization in Deep Neural Networks

Abstract

We revisit the choice of SGD for training deep neural networks by reconsidering the appropriate geometry in which to optimize the weights. We argue for a geometry invariant to rescaling of weights that does not affect the output of the network, and suggest Path-SGD, which is an approximate steepest descent method with respect to a path-wise regularizer related to max-norm regularization. Path-SGD is easy and efficient to implement and leads to empirical gains over SGD and AdaGrad.
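As an illustration of the rescaling-invariant update the abstract describes, here is a minimal NumPy sketch of a Path-SGD-style step for a plain fully connected network. It is not the authors' released implementation; the function and variable names are illustrative, and the per-weight scale follows the p = 2 path regularizer, computed by propagating squared weights forward from the inputs and backward from the outputs.

import numpy as np

def path_sgd_step(weights, grads, lr=0.01, eps=1e-12):
    """One Path-SGD-style update (illustrative sketch, p = 2).

    weights: list of matrices W[l] of shape (n_{l+1}, n_l), so that the
             layer l+1 pre-activations are W[l] @ (layer l activations).
    grads:   list of loss gradients with the same shapes as weights.

    The per-weight scale gamma[l][j, i] sums, over all input-output paths
    through edge (i -> j), the product of the squared weights of the other
    edges on that path; each gradient coordinate is divided by this scale.
    """
    sq = [W ** 2 for W in weights]

    # Forward pass: total squared-path weight from the inputs to each unit.
    fwd = [np.ones(weights[0].shape[1])]
    for S in sq:
        fwd.append(S @ fwd[-1])

    # Backward pass: total squared-path weight from each unit to the outputs.
    bwd = [np.ones(weights[-1].shape[0])]
    for S in reversed(sq):
        bwd.append(S.T @ bwd[-1])
    bwd.reverse()

    # Scale each gradient coordinate by the path factor and take the step.
    new_weights = []
    for l, (W, G) in enumerate(zip(weights, grads)):
        gamma = np.outer(bwd[l + 1], fwd[l])  # shape (n_{l+1}, n_l)
        new_weights.append(W - lr * G / (gamma + eps))
    return new_weights

Because gamma depends only on products of squared weights along paths, the resulting step is unchanged if one layer's incoming weights are multiplied by a constant and the outgoing weights divided by it, which is the rescaling invariance the abstract refers to.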

Cite

Text

Neyshabur et al. "Path-SGD: Path-Normalized Optimization in Deep Neural Networks." Neural Information Processing Systems, 2015.

Markdown

[Neyshabur et al. "Path-SGD: Path-Normalized Optimization in Deep Neural Networks." Neural Information Processing Systems, 2015.](https://mlanthology.org/neurips/2015/neyshabur2015neurips-pathsgd/)

BibTeX

@inproceedings{neyshabur2015neurips-pathsgd,
  title     = {{Path-SGD: Path-Normalized Optimization in Deep Neural Networks}},
  author    = {Neyshabur, Behnam and Salakhutdinov, Ruslan and Srebro, Nati},
  booktitle = {Neural Information Processing Systems},
  year      = {2015},
  pages     = {2422--2430},
  url       = {https://mlanthology.org/neurips/2015/neyshabur2015neurips-pathsgd/}
}