Variational Recurrent Adversarial Deep Domain Adaptation
Abstract
We study the problem of learning domain-invariant representations for time series data while transferring the complex temporal latent dependencies between the domains. Our model, termed Variational Recurrent Adversarial Deep Domain Adaptation (VRADA), is built atop a variational recurrent neural network (VRNN) and is trained adversarially to capture complex temporal relationships that are domain-invariant. To the best of our knowledge, this is the first model to capture and transfer temporal latent dependencies in multivariate time-series data. Through experiments on real-world multivariate healthcare time-series datasets, we empirically demonstrate that learning temporal dependencies improves our model's ability to create domain-invariant representations, allowing it to outperform current state-of-the-art deep domain adaptation approaches.
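The adversarial training the abstract refers to is commonly implemented with a gradient reversal layer (in the style of DANN) placed between the feature extractor and a domain classifier: the forward pass is the identity, while the backward pass flips and scales the gradient so the features are pushed toward domain-invariance. A minimal dependency-free sketch of that building block (class name and `lam` weight are illustrative, not from the paper):

```python
class GradientReversal:
    """Identity in the forward pass; reverses (and scales) gradients
    in the backward pass. Used so that minimizing the domain
    classifier's loss maximizes domain confusion for the encoder."""

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off weight for the adversarial signal

    def forward(self, x):
        # Pass features through unchanged.
        return x

    def backward(self, grad_output):
        # Flip the sign and scale the gradient flowing back
        # from the domain classifier into the feature extractor.
        return [-self.lam * g for g in grad_output]
```

In a VRADA-style model, the encoder's latent representation would pass through such a layer before the domain classifier, while the label predictor and the VRNN reconstruction objective receive the un-reversed gradients.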
Cite
Text
Purushotham et al. "Variational Recurrent Adversarial Deep Domain Adaptation." International Conference on Learning Representations, 2017.
Markdown
[Purushotham et al. "Variational Recurrent Adversarial Deep Domain Adaptation." International Conference on Learning Representations, 2017.](https://mlanthology.org/iclr/2017/purushotham2017iclr-variational/)
BibTeX
@inproceedings{purushotham2017iclr-variational,
title = {{Variational Recurrent Adversarial Deep Domain Adaptation}},
author = {Purushotham, Sanjay and Carvalho, Wilka and Nilanon, Tanachat and Liu, Yan},
booktitle = {International Conference on Learning Representations},
year = {2017},
url = {https://mlanthology.org/iclr/2017/purushotham2017iclr-variational/}
}