A Smooth Optimisation Perspective on Training Feedforward Neural Networks

Abstract

We present a smooth optimisation perspective on training multilayer Feedforward Neural Networks (FNNs) in the supervised learning setting. By characterising the critical point conditions of an FNN-based optimisation problem, we identify conditions under which local optima of the cost function can be eliminated. By studying the Hessian structure of the cost function at the global minima, we develop an approximate Newton FNN algorithm, which demonstrates promising convergence properties.
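To illustrate the flavour of an approximate Newton update for a feedforward network, the following is a minimal, hypothetical sketch using a damped Gauss-Newton step on a tiny one-hidden-layer network with squared loss. The network size, data, damping constant, and finite-difference Jacobian are all illustrative assumptions and not the paper's actual algorithm.

```python
# Hypothetical sketch: a damped Gauss-Newton ("approximate Newton") step for a
# tiny one-hidden-layer network with squared loss. This is NOT the paper's
# exact algorithm; the network, data, and damping are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))                     # 20 samples, 3 features
y = np.sin(X @ np.array([1.0, -0.5, 0.3]))       # toy regression targets

n_in, n_hidden = 3, 4
n_params = n_in * n_hidden + n_hidden            # hidden weights W plus output weights v

def predict(theta, X):
    """One-hidden-layer network: tanh hidden layer, linear scalar output."""
    W = theta[:n_in * n_hidden].reshape(n_in, n_hidden)
    v = theta[n_in * n_hidden:]
    return np.tanh(X @ W) @ v

def residual_jacobian(theta, X, y, eps=1e-6):
    """Finite-difference Jacobian of the residual vector w.r.t. the parameters."""
    r0 = predict(theta, X) - y
    J = np.empty((X.shape[0], theta.size))
    for j in range(theta.size):
        t = theta.copy()
        t[j] += eps
        J[:, j] = ((predict(t, X) - y) - r0) / eps
    return J, r0

theta = rng.normal(scale=0.5, size=n_params)
lam = 1e-1                                       # damping keeps the linear solve well-posed
initial_loss = 0.5 * np.mean((predict(theta, X) - y) ** 2)

for _ in range(50):
    J, r = residual_jacobian(theta, X, y)
    # Approximate Newton step: (J^T J + lam*I) dtheta = J^T r
    step = np.linalg.solve(J.T @ J + lam * np.eye(theta.size), J.T @ r)
    theta -= step

final_loss = 0.5 * np.mean((predict(theta, X) - y) ** 2)
```

The Gauss-Newton matrix `J.T @ J` plays the role of an approximate Hessian of the squared-loss cost; the damping term `lam * np.eye(...)` is a standard Levenberg-Marquardt style regularisation to keep the step finite near rank-deficient Jacobians.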

Cite

Text

Hao Shen. "A Smooth Optimisation Perspective on Training Feedforward Neural Networks." International Conference on Learning Representations, 2017.

Markdown

[Hao Shen. "A Smooth Optimisation Perspective on Training Feedforward Neural Networks." International Conference on Learning Representations, 2017.](https://mlanthology.org/iclr/2017/shen2017iclr-smooth/)

BibTeX

@inproceedings{shen2017iclr-smooth,
  title     = {{A Smooth Optimisation Perspective on Training Feedforward Neural Networks}},
  author    = {Shen, Hao},
  booktitle = {International Conference on Learning Representations},
  year      = {2017},
  url       = {https://mlanthology.org/iclr/2017/shen2017iclr-smooth/}
}