Learning Dynamic Generator Model by Alternating Back-Propagation Through Time
Abstract
This paper studies the dynamic generator model for spatio-temporal processes such as dynamic textures and action sequences in video data. In this model, each time frame of the video sequence is generated by a generator model, which is a non-linear transformation of a latent state vector, where the non-linear transformation is parametrized by a top-down neural network. The sequence of latent state vectors follows a non-linear auto-regressive model, where the state vector of the next frame is a non-linear transformation of the state vector of the current frame as well as an independent noise vector that provides randomness in the transition. The non-linear transformation of this transition model can be parametrized by a feedforward neural network. We show that this model can be learned by an alternating back-propagation through time algorithm that iteratively samples the noise vectors and updates the parameters of the transition model and the generator model. We show that our training method can learn realistic models for dynamic textures and action patterns.
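To make the structure of the model and the learning algorithm concrete, below is a minimal sketch, not the authors' released code, written in PyTorch with hypothetical network sizes (latent state of dimension 100, noise of dimension 100, 64x64 RGB frames). It shows the two components described in the abstract: a feedforward transition network that maps the previous state and a noise vector to the next state, and a top-down generator network that maps each state to a frame, together with a Langevin-style inference step over the noise vectors that the alternating back-propagation through time procedure would interleave with parameter updates. All function names and hyperparameters here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TransitionModel(nn.Module):
    """Non-linear auto-regressive transition: s_t = f_alpha(s_{t-1}, xi_t)."""
    def __init__(self, state_dim=100, noise_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + noise_dim, 200), nn.Tanh(),
            nn.Linear(200, state_dim), nn.Tanh(),
        )

    def forward(self, s_prev, xi):
        return self.net(torch.cat([s_prev, xi], dim=-1))

class EmissionModel(nn.Module):
    """Top-down generator: frame x_t = g_beta(s_t)."""
    def __init__(self, state_dim=100):
        super().__init__()
        self.fc = nn.Linear(state_dim, 128 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, s):
        h = self.fc(s).view(-1, 128, 8, 8)
        return self.deconv(h)  # (batch, 3, 64, 64)

def unroll(transition, emission, s0, xis):
    """Generate a video by unrolling the latent transition over time."""
    frames, s = [], s0
    for xi in xis:                      # xis: list of per-frame noise vectors
        s = transition(s, xi)
        frames.append(emission(s))
    return torch.stack(frames, dim=1)   # (batch, T, 3, 64, 64)

def langevin_infer(transition, emission, video, xis, s0,
                   steps=20, delta=0.3, sigma=1.0):
    """Inference step of alternating back-propagation through time (sketch):
    Langevin updates of the noise vectors xi_t given an observed video.
    The learning step (not shown) would then take a gradient step on the
    parameters of the two networks using the inferred noise vectors."""
    for _ in range(steps):
        xis = [xi.detach().requires_grad_(True) for xi in xis]
        recon = unroll(transition, emission, s0, xis)
        # negative log posterior up to a constant:
        # reconstruction term plus a standard Gaussian prior on each xi_t
        loss = ((video - recon) ** 2).sum() / (2 * sigma ** 2) \
               + 0.5 * sum((xi ** 2).sum() for xi in xis)
        grads = torch.autograd.grad(loss, xis)
        with torch.no_grad():
            xis = [xi - 0.5 * delta ** 2 * g + delta * torch.randn_like(xi)
                   for xi, g in zip(grads, xis)[::-1] or zip(xis, grads)]
    return xis
```

In the full algorithm, this inference step and a parameter-update step (back-propagating the reconstruction loss through time into the transition and emission networks) are alternated until convergence.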
Cite
Text
Xie et al. "Learning Dynamic Generator Model by Alternating Back-Propagation Through Time." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.33015498

Markdown
[Xie et al. "Learning Dynamic Generator Model by Alternating Back-Propagation Through Time." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/xie2019aaai-learning/) doi:10.1609/AAAI.V33I01.33015498

BibTeX
@inproceedings{xie2019aaai-learning,
  title     = {{Learning Dynamic Generator Model by Alternating Back-Propagation Through Time}},
  author    = {Xie, Jianwen and Gao, Ruiqi and Zheng, Zilong and Zhu, Song-Chun and Wu, Ying Nian},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {5498-5507},
  doi       = {10.1609/AAAI.V33I01.33015498},
  url       = {https://mlanthology.org/aaai/2019/xie2019aaai-learning/}
}