Variational Recurrent Auto-Encoders
Abstract
In this paper we propose a model that combines the strengths of RNNs and SGVB: the Variational Recurrent Auto-Encoder (VRAE). Such a model can be used for efficient, large-scale unsupervised learning on time series data, mapping each time series to a latent vector representation. The model is generative, so that data can be generated from samples of the latent space. An important contribution of this work is that the model can make use of unlabeled data to facilitate supervised training of RNNs by initialising the weights and network state.
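The encode–sample–decode path the abstract describes can be sketched as follows. This is a minimal forward-pass illustration in numpy, not the paper's implementation: all dimensions, weight initialisations, and function names here are illustrative assumptions. An RNN encoder summarises the sequence into the parameters of q(z|x), the SGVB reparameterisation trick draws a latent sample, and a second RNN decodes a sequence from that sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not taken from the paper)
T, x_dim, h_dim, z_dim = 20, 3, 16, 2

# Encoder: a vanilla RNN reads the sequence; its final hidden state
# is mapped to the mean and log-variance of q(z | x).
W_xh = rng.normal(0, 0.1, (h_dim, x_dim))
W_hh = rng.normal(0, 0.1, (h_dim, h_dim))
W_mu = rng.normal(0, 0.1, (z_dim, h_dim))
W_lv = rng.normal(0, 0.1, (z_dim, h_dim))

def encode(x_seq):
    h = np.zeros(h_dim)
    for x_t in x_seq:
        h = np.tanh(W_xh @ x_t + W_hh @ h)
    return W_mu @ h, W_lv @ h  # mu, log sigma^2

# Reparameterisation trick (the SGVB estimator): z = mu + sigma * eps,
# which keeps the sampling step differentiable w.r.t. mu and log_var.
def sample_z(mu, log_var):
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Decoder: z initialises the hidden state of a second RNN that
# emits a reconstruction step by step (generation works the same way,
# starting from a z sampled from the prior N(0, I)).
W_zh = rng.normal(0, 0.1, (h_dim, z_dim))
W_hh_d = rng.normal(0, 0.1, (h_dim, h_dim))
W_hx = rng.normal(0, 0.1, (x_dim, h_dim))

def decode(z, steps):
    h = np.tanh(W_zh @ z)
    out = []
    for _ in range(steps):
        h = np.tanh(W_hh_d @ h)
        out.append(W_hx @ h)
    return np.stack(out)

x = rng.normal(size=(T, x_dim))
mu, log_var = encode(x)
z = sample_z(mu, log_var)
x_rec = decode(z, T)

# KL(q(z|x) || N(0, I)) in its usual closed form; together with a
# reconstruction term this gives the variational lower bound.
kl = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))
print(x_rec.shape, kl)
```

Training would backpropagate the lower bound through both RNNs; only the untrained forward pass is shown here.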
Cite
Text
Fabius et al. "Variational Recurrent Auto-Encoders." International Conference on Learning Representations, 2015.
Markdown
[Fabius et al. "Variational Recurrent Auto-Encoders." International Conference on Learning Representations, 2015.](https://mlanthology.org/iclr/2015/fabius2015iclr-variational/)
BibTeX
@inproceedings{fabius2015iclr-variational,
title = {{Variational Recurrent Auto-Encoders}},
author = {Fabius, Otto and van Amersfoort, Joost R. and Kingma, Diederik P.},
booktitle = {International Conference on Learning Representations},
year = {2015},
url = {https://mlanthology.org/iclr/2015/fabius2015iclr-variational/}
}