Training Neural Networks with Stochastic Hessian-Free Optimization
Abstract
Hessian-free (HF) optimization has been successfully used for training deep autoencoders and recurrent networks. HF uses the conjugate gradient algorithm to construct update directions from curvature-vector products that can be computed in roughly the same time as gradients. In this paper we exploit this property and study stochastic HF with gradient and curvature mini-batches whose sizes are independent of the dataset size. We modify Martens' HF for this setting and integrate dropout, a method for preventing co-adaptation of feature detectors, to guard against overfitting. Stochastic Hessian-free optimization provides an intermediary between SGD and HF that achieves competitive performance on both classification and deep autoencoder experiments.
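To make the mechanism in the abstract concrete, the sketch below shows one stochastic HF-style step: the gradient is taken on one mini-batch, curvature-vector products are taken on a separate (possibly smaller) mini-batch, and a fixed budget of conjugate gradient iterations solves the damped curvature system for the update direction. This is only a minimal illustration under assumed choices, not the paper's implementation: it uses a toy logistic-loss linear model, plain Hessian-vector products via JAX (the paper follows Martens in using the Gauss-Newton matrix), and hypothetical function names such as `stochastic_hf_step`.

```python
# Illustrative sketch of a stochastic Hessian-free step (not the paper's code).
# JAX makes a curvature-vector product cost about one extra forward/backward pass,
# the property the paper exploits.
import jax
import jax.numpy as jnp

def loss(params, batch):
    # Toy logistic loss on a linear model; stands in for a deep network.
    x, y = batch
    logits = x @ params
    return jnp.mean(jnp.log1p(jnp.exp(-y * logits)))

def hvp(params, batch, v):
    # Hessian-vector product via forward-over-reverse differentiation.
    return jax.jvp(lambda p: jax.grad(loss)(p, batch), (params,), (v,))[1]

def cg_solve(matvec, b, iters=10):
    # Plain conjugate gradient for matvec(d) = b with a fixed iteration budget.
    x = jnp.zeros_like(b)
    r = b - matvec(x)
    p, rs = r, r @ r
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def stochastic_hf_step(params, grad_batch, curv_batch, damping=1.0, lr=1.0):
    # Gradient from one mini-batch; curvature-vector products from an independent
    # mini-batch, as in the stochastic setting studied in the paper.
    g = jax.grad(loss)(params, grad_batch)
    matvec = lambda v: hvp(params, curv_batch, v) + damping * v  # Tikhonov damping
    d = cg_solve(matvec, -g)
    return params + lr * d

# Hypothetical usage on synthetic data.
key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (256, 10))
y = jnp.sign(x @ jnp.ones(10))
params = jnp.zeros(10)
for _ in range(5):
    params = stochastic_hf_step(params, (x[:128], y[:128]), (x[128:160], y[128:160]))
print(loss(params, (x, y)))
```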
Cite
Text
Kiros. "Training Neural Networks with Stochastic Hessian-Free Optimization." International Conference on Learning Representations, 2013.Markdown
[Kiros. "Training Neural Networks with Stochastic Hessian-Free Optimization." International Conference on Learning Representations, 2013.](https://mlanthology.org/iclr/2013/kiros2013iclr-training/)BibTeX
@inproceedings{kiros2013iclr-training,
title = {{Training Neural Networks with Stochastic Hessian-Free Optimization}},
author = {Kiros, Ryan},
booktitle = {International Conference on Learning Representations},
year = {2013},
url = {https://mlanthology.org/iclr/2013/kiros2013iclr-training/}
}