Learning Multilevel Distributed Representations for High-Dimensional Sequences
Abstract
We describe a new family of non-linear sequence models that are substantially more powerful than hidden Markov models or linear dynamical systems. Our models have simple approximate inference and learning procedures that work well in practice. Multilevel representations of sequential data can be learned one hidden layer at a time, and adding extra hidden layers improves the resulting generative models. The models can be trained on very high-dimensional, very non-linear data such as raw pixel sequences. Their performance is demonstrated using synthetic video sequences of two balls bouncing in a box.
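The "one hidden layer at a time" procedure the abstract mentions is greedy layer-wise pretraining: train a restricted Boltzmann machine (RBM) on the data, then treat its hidden activations as the data for the next RBM. The sketch below illustrates only that stacking idea with plain binary RBMs trained by one-step contrastive divergence (CD-1); it omits the temporal (frame-to-frame) connections that the paper's actual sequence model adds, and all sizes, learning rates, and the toy "frames" are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=5, lr=0.1):
    """Train a binary RBM on `data` with one-step contrastive divergence (CD-1)."""
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v = np.zeros(n_visible)   # visible biases
    b_h = np.zeros(n_hidden)    # hidden biases
    for _ in range(epochs):
        for v0 in data:
            # Positive phase: sample hidden units given a data vector.
            p_h0 = sigmoid(v0 @ W + b_h)
            h0 = (rng.random(n_hidden) < p_h0).astype(float)
            # Negative phase: one reconstruction step.
            p_v1 = sigmoid(h0 @ W.T + b_v)
            p_h1 = sigmoid(p_v1 @ W + b_h)
            # CD-1 updates: data statistics minus reconstruction statistics.
            W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
            b_v += lr * (v0 - p_v1)
            b_h += lr * (p_h0 - p_h1)
    return W, b_h

def up_pass(data, W, b_h):
    """Deterministic up-pass: hidden probabilities become the next layer's data."""
    return sigmoid(data @ W + b_h)

# Toy binary "frames" standing in for pixel sequences (200 frames, 16 pixels each).
frames = (rng.random((200, 16)) < 0.3).astype(float)

# Greedy stacking: train layer 1 on the frames, then layer 2 on layer 1's activations.
W1, bh1 = train_rbm(frames, n_hidden=12)
layer1 = up_pass(frames, W1, bh1)
W2, bh2 = train_rbm(layer1, n_hidden=8)
layer2 = up_pass(layer1, W2, bh2)
print(layer2.shape)  # → (200, 8)
```

Each added layer models the structure left unexplained by the layer below it, which is the intuition behind the abstract's claim that extra hidden layers improve the generative model.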
Cite
Text
Sutskever and Hinton. "Learning Multilevel Distributed Representations for High-Dimensional Sequences." Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, 2007.
Markdown
[Sutskever and Hinton. "Learning Multilevel Distributed Representations for High-Dimensional Sequences." Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, 2007.](https://mlanthology.org/aistats/2007/sutskever2007aistats-learning/)
BibTeX
@inproceedings{sutskever2007aistats-learning,
title = {{Learning Multilevel Distributed Representations for High-Dimensional Sequences}},
author = {Sutskever, Ilya and Hinton, Geoffrey},
booktitle = {Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics},
year = {2007},
pages = {548-555},
volume = {2},
url = {https://mlanthology.org/aistats/2007/sutskever2007aistats-learning/}
}