Unsupervised Learning of Long-Term Motion Dynamics for Videos

Abstract

We present an unsupervised representation learning approach that compactly encodes the motion dependencies in videos. Given a pair of images from a video clip, our framework learns to predict the long-term 3D motions. To reduce the complexity of the learning framework, we propose to describe the motion as a sequence of atomic 3D flows computed from the RGB-D modality. We use a Recurrent Neural Network-based Encoder-Decoder framework to predict these sequences of flows. We argue that in order for the decoder to reconstruct these sequences, the encoder must learn a robust video representation that captures long-term motion dependencies and spatio-temporal relations. We demonstrate the effectiveness of our learned temporal representations on activity classification across multiple modalities and datasets such as NTU RGB+D and MSR Daily Activity 3D. Our framework is generic to any input modality, i.e., RGB, depth, and RGB-D videos.
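
The sketch below illustrates the general idea described in the abstract, not the authors' actual architecture: a convolutional encoder compresses a pair of RGB-D frames into a latent code, and an RNN (here an LSTM) decoder unrolls that code into a sequence of coarse 3D flow maps. All layer sizes, the frame resolution, the flow map resolution, and class/variable names are illustrative assumptions.

```python
# Minimal sketch (assumptions throughout) of a frame-pair encoder + RNN decoder
# that predicts a sequence of atomic 3D flow maps, in the spirit of the abstract.
import torch
import torch.nn as nn

class FlowSeqEncoderDecoder(nn.Module):
    def __init__(self, in_channels=4, hidden_dim=256, flow_channels=3, seq_len=8):
        super().__init__()
        self.seq_len = seq_len
        # Convolutional encoder over the stacked frame pair (RGB-D -> 2*4 channels).
        self.encoder = nn.Sequential(
            nn.Conv2d(2 * in_channels, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, hidden_dim),
        )
        # LSTM decoder unrolled for seq_len steps, one flow map per step.
        self.decoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        # Project each hidden state to a coarse flow map (assumed 16x16 here).
        self.flow_shape = (flow_channels, 16, 16)
        self.flow_head = nn.Linear(hidden_dim, flow_channels * 16 * 16)

    def forward(self, frame_t, frame_t1):
        # frame_t, frame_t1: (B, in_channels, H, W)
        z = self.encoder(torch.cat([frame_t, frame_t1], dim=1))      # (B, hidden_dim)
        # Feed the same latent code at every decoding step (a simple choice).
        z_seq = z.unsqueeze(1).repeat(1, self.seq_len, 1)             # (B, T, hidden_dim)
        h, _ = self.decoder(z_seq)                                    # (B, T, hidden_dim)
        flows = self.flow_head(h).view(h.size(0), self.seq_len, *self.flow_shape)
        return flows                                                  # (B, T, 3, 16, 16)

if __name__ == "__main__":
    model = FlowSeqEncoderDecoder()
    a = torch.randn(2, 4, 224, 224)   # RGB-D frame at time t
    b = torch.randn(2, 4, 224, 224)   # RGB-D frame at time t+1
    print(model(a, b).shape)          # torch.Size([2, 8, 3, 16, 16])
```

In this reading, the decoder can only reconstruct the flow sequence if the encoder's latent code captures long-term motion; after unsupervised training, the encoder would be reused as a feature extractor for activity classification.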

Cite

Text

Luo et al. "Unsupervised Learning of Long-Term Motion Dynamics for Videos." Conference on Computer Vision and Pattern Recognition, 2017. doi:10.1109/CVPR.2017.751

Markdown

[Luo et al. "Unsupervised Learning of Long-Term Motion Dynamics for Videos." Conference on Computer Vision and Pattern Recognition, 2017.](https://mlanthology.org/cvpr/2017/luo2017cvpr-unsupervised/) doi:10.1109/CVPR.2017.751

BibTeX

@inproceedings{luo2017cvpr-unsupervised,
  title     = {{Unsupervised Learning of Long-Term Motion Dynamics for Videos}},
  author    = {Luo, Zelun and Peng, Boya and Huang, De-An and Alahi, Alexandre and Fei-Fei, Li},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2017},
  doi       = {10.1109/CVPR.2017.751},
  url       = {https://mlanthology.org/cvpr/2017/luo2017cvpr-unsupervised/}
}