Heteroscedastic Temporal Variational Autoencoder for Irregularly Sampled Time Series

Abstract

Irregularly sampled time series commonly occur in several domains where they present a significant challenge to standard deep learning models. In this paper, we propose a new deep learning framework for probabilistic interpolation of irregularly sampled time series that we call the Heteroscedastic Temporal Variational Autoencoder (HeTVAE). HeTVAE includes a novel input layer to encode information about input observation sparsity, a temporal VAE architecture to propagate uncertainty due to input sparsity, and a heteroscedastic output layer to enable variable uncertainty in the output interpolations. Our results show that the proposed architecture is better able to reflect variable uncertainty through time due to sparse and irregular sampling than a range of baseline and traditional models, as well as recently proposed deep latent variable models that use homoscedastic output layers.
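The abstract refers to a heteroscedastic output layer that produces a separate uncertainty estimate at each output time point, in contrast to a homoscedastic layer with a single shared noise level. The following is a minimal illustrative sketch of such a layer in PyTorch, not the authors' implementation; the module and parameter names are assumptions for illustration only.

import torch
import torch.nn as nn

class HeteroscedasticOutputLayer(nn.Module):
    # Illustrative sketch (hypothetical names): maps per-time-point hidden
    # features to a Gaussian mean and variance, so predictive uncertainty
    # can vary across query time points.
    def __init__(self, hidden_dim, obs_dim):
        super().__init__()
        self.mean_head = nn.Linear(hidden_dim, obs_dim)
        self.log_var_head = nn.Linear(hidden_dim, obs_dim)

    def forward(self, h):
        mu = self.mean_head(h)
        var = torch.exp(self.log_var_head(h))  # exponential keeps variance positive
        return mu, var

# Usage: decode features at 50 query time points into a time-varying
# predictive distribution (shapes: batch, query times, dimension).
h = torch.randn(8, 50, 64)
layer = HeteroscedasticOutputLayer(64, 1)
mu, var = layer(h)  # each has shape (8, 50, 1)

Because the variance head is a function of the per-time-point features, regions with sparse input observations can be assigned larger predicted variance than densely observed regions.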

Cite

Text

Shukla and Marlin. "Heteroscedastic Temporal Variational Autoencoder for Irregularly Sampled Time Series." International Conference on Learning Representations, 2022.

Markdown

[Shukla and Marlin. "Heteroscedastic Temporal Variational Autoencoder for Irregularly Sampled Time Series." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/shukla2022iclr-heteroscedastic/)

BibTeX

@inproceedings{shukla2022iclr-heteroscedastic,
  title     = {{Heteroscedastic Temporal Variational Autoencoder for Irregularly Sampled Time Series}},
  author    = {Shukla, Satya Narayan and Marlin, Benjamin},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/shukla2022iclr-heteroscedastic/}
}