Forward-Backward Retraining of Recurrent Neural Networks
Abstract
This paper describes the training of a recurrent neural network as the letter posterior probability estimator for a hidden Markov model, off-line handwriting recognition system. The network estimates posterior distributions for each of a series of frames representing sections of a handwritten word. The supervised training algorithm, backpropagation through time, requires target outputs to be provided for each frame. Three methods for deriving these targets are presented. A novel method based upon the forward-backward algorithm is found to result in the recognizer with the lowest error rate.
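The target-derivation idea named in the abstract can be sketched in code. The following is a minimal illustration, not the paper's exact recipe: `forward_backward_targets` is a hypothetical helper, and in the paper's setting the per-frame emission log-likelihoods would come from the network's own (prior-scaled) outputs evaluated against the letter states of the correct word's HMM.

```python
import numpy as np
from scipy.special import logsumexp

def forward_backward_targets(log_lik, log_trans, log_init):
    """Per-frame state occupancy probabilities (gamma) of an HMM.

    log_lik   : (T, S) log likelihood of each frame under each state
    log_trans : (S, S) log transition matrix, log_trans[i, j] = log P(j | i)
    log_init  : (S,)   log initial state distribution
    """
    T, S = log_lik.shape
    log_alpha = np.empty((T, S))
    log_beta = np.empty((T, S))
    # Forward recursion: alpha[t, s] = lik[t, s] * sum_s' alpha[t-1, s'] * trans[s', s]
    log_alpha[0] = log_init + log_lik[0]
    for t in range(1, T):
        for s in range(S):
            log_alpha[t, s] = log_lik[t, s] + logsumexp(log_alpha[t - 1] + log_trans[:, s])
    # Backward recursion: beta[t, s] = sum_s' trans[s, s'] * lik[t+1, s'] * beta[t+1, s']
    log_beta[T - 1] = 0.0
    for t in range(T - 2, -1, -1):
        for s in range(S):
            log_beta[t, s] = logsumexp(log_trans[s] + log_lik[t + 1] + log_beta[t + 1])
    # gamma[t, s] is proportional to alpha[t, s] * beta[t, s]; normalise each frame.
    log_gamma = log_alpha + log_beta
    log_gamma -= logsumexp(log_gamma, axis=1, keepdims=True)
    return np.exp(log_gamma)  # soft targets; each row sums to 1
```

The resulting gamma matrix replaces hard 0/1 frame labels: each frame's target is the posterior probability of each letter state given the whole observation sequence, which is what distinguishes this scheme from the fixed-segmentation target methods the paper compares against.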
Senior and Robinson. "Forward-Backward Retraining of Recurrent Neural Networks." Neural Information Processing Systems, 1995.
@inproceedings{senior1995neurips-forwardbackward,
title = {{Forward-Backward Retraining of Recurrent Neural Networks}},
author = {Senior, Andrew W. and Robinson, Anthony J.},
booktitle = {Neural Information Processing Systems},
year = {1995},
pages = {743--749},
url = {https://mlanthology.org/neurips/1995/senior1995neurips-forwardbackward/}
}