Adaptive Learning Rate via Covariance Matrix Based Preconditioning for Deep Neural Networks
Abstract
Stochastic optimization methods are widely used for training deep neural networks. However, achieving effective training with these methods remains a challenging research problem because it is difficult to find good parameters on a loss function that has many saddle points. In this paper, we propose a stochastic optimization method called SDProp for effective training of deep neural networks. Its key idea is to effectively explore parameters on the complex surface of a loss function. We additionally develop a momentum version of SDProp. While our approaches are easy to implement and memory efficient, they are more effective than other practical stochastic optimization methods for deep neural networks.
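As a rough illustration of the general idea named in the title, the sketch below shows what covariance matrix based preconditioning can look like under a diagonal approximation: exponential moving estimates of the gradient mean and variance are maintained, and each step is scaled by the gradient's standard deviation. This is only a minimal sketch, not SDProp's exact update rule; all function names, hyperparameters, and defaults here are illustrative assumptions.

import numpy as np

def covariance_preconditioned_step(params, grad, state, lr=1e-3, gamma=0.99, eps=1e-8):
    # Illustrative diagonal covariance preconditioning (assumed names and
    # defaults, not the paper's exact algorithm).
    mu, var = state
    mu = gamma * mu + (1.0 - gamma) * grad                 # running mean of gradients
    var = gamma * var + (1.0 - gamma) * (grad - mu) ** 2   # running diagonal covariance
    params = params - lr * grad / (np.sqrt(var) + eps)     # step scaled by gradient std. dev.
    return params, (mu, var)

# Usage: carry (mu, var) across iterations, initialized to zeros.
params = np.random.randn(10)
state = (np.zeros(10), np.zeros(10))
for _ in range(100):
    grad = 2.0 * params  # toy gradient of the quadratic loss ||params||^2
    params, state = covariance_preconditioned_step(params, grad, state)

Dividing by the standard deviation rather than the raw second moment (as RMSProp does) centers the estimate, which is one sense in which a covariance, rather than an uncentered moment, acts as the preconditioner.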
Cite
Text
Ida et al. "Adaptive Learning Rate via Covariance Matrix Based Preconditioning for Deep Neural Networks." International Joint Conference on Artificial Intelligence, 2017. doi:10.24963/ijcai.2017/267
Markdown
[Ida et al. "Adaptive Learning Rate via Covariance Matrix Based Preconditioning for Deep Neural Networks." International Joint Conference on Artificial Intelligence, 2017.](https://mlanthology.org/ijcai/2017/ida2017ijcai-adaptive/) doi:10.24963/ijcai.2017/267
BibTeX
@inproceedings{ida2017ijcai-adaptive,
title = {{Adaptive Learning Rate via Covariance Matrix Based Preconditioning for Deep Neural Networks}},
author = {Ida, Yasutoshi and Fujiwara, Yasuhiro and Iwamura, Sotetsu},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2017},
pages = {1923--1929},
doi = {10.24963/ijcai.2017/267},
url = {https://mlanthology.org/ijcai/2017/ida2017ijcai-adaptive/}
}