Hessian-Free Optimization for Learning Deep Multidimensional Recurrent Neural Networks

Abstract

Multidimensional recurrent neural networks (MDRNNs) have shown remarkable performance in speech and handwriting recognition. The performance of an MDRNN improves as its depth increases, and the difficulty of training deeper networks is overcome by using Hessian-free (HF) optimization. When connectionist temporal classification (CTC) is used as the objective for training an MDRNN for sequence labeling, the non-convexity of CTC poses a problem in applying HF to the network. As a solution, a convex approximation of CTC is formulated, and its relationship with the EM algorithm and the Fisher information matrix is discussed. An MDRNN of up to 15 layers is successfully trained using HF, yielding improved performance on sequence labeling.

Cite

Text

Cho et al. "Hessian-Free Optimization for Learning Deep Multidimensional Recurrent Neural Networks." Neural Information Processing Systems, 2015.

Markdown

[Cho et al. "Hessian-Free Optimization for Learning Deep Multidimensional Recurrent Neural Networks." Neural Information Processing Systems, 2015.](https://mlanthology.org/neurips/2015/cho2015neurips-hessianfree/)

BibTeX

@inproceedings{cho2015neurips-hessianfree,
  title     = {{Hessian-Free Optimization for Learning Deep Multidimensional Recurrent Neural Networks}},
  author    = {Cho, Minhyung and Dhir, Chandra and Lee, Jaehyung},
  booktitle = {Neural Information Processing Systems},
  year      = {2015},
  pages     = {883--891},
  url       = {https://mlanthology.org/neurips/2015/cho2015neurips-hessianfree/}
}