Learning Temporal Embeddings for Complex Video Analysis

Abstract

In this paper, we propose to learn temporal embeddings of video frames for complex video analysis. Large quantities of unlabeled video data can be easily obtained from the Internet. These videos possess the implicit weak label that they are sequences of temporally and semantically coherent images. We leverage this information to learn temporal embeddings for video frames by associating frames with the temporal context that they appear in. To do this, we propose a scheme for incorporating temporal context based on past and future frames in videos, and compare this to other contextual representations. In addition, we show how data augmentation using multi-resolution samples and hard negatives helps to significantly improve the quality of the learned embeddings. We evaluate various design decisions for learning temporal embeddings, and show that our embeddings can improve performance for multiple video tasks such as retrieval, classification, and temporal order recovery in unconstrained Internet video.
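The abstract describes a context-prediction objective: a frame's embedding should agree with a context vector built from its past and future frames, with hard negatives used to sharpen the learned space. As a rough illustration only (not the authors' released implementation), the sketch below implements one plausible version of this idea with a max-margin ranking loss; the feature dimension, window size, margin, and within-video negative sampling are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the authors' code) of context-based temporal
# embedding learning with a max-margin ranking loss. feat_dim, emb_dim,
# window, margin, and the negative-sampling rule are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalEmbedding(nn.Module):
    def __init__(self, feat_dim=4096, emb_dim=256):
        super().__init__()
        # Linear map from raw per-frame features (e.g., CNN activations)
        # into the learned temporal embedding space.
        self.proj = nn.Linear(feat_dim, emb_dim)

    def forward(self, feats):
        return F.normalize(self.proj(feats), dim=-1)

def ranking_loss(model, frames, idx, window=2, margin=1.0):
    """Pull frame `idx` toward the mean embedding of its temporal context
    (past/future frames within `window`); push a sampled negative away."""
    emb = model(frames)                                   # (T, emb_dim)
    ctx_idx = [j for j in range(max(0, idx - window),
                                min(len(frames), idx + window + 1))
               if j != idx]
    context = emb[ctx_idx].mean(dim=0)                    # context vector
    # "Hard" negative: a frame from the same video, outside the window.
    neg_pool = [j for j in range(len(frames)) if abs(j - idx) > window]
    neg = emb[neg_pool[torch.randint(len(neg_pool), (1,)).item()]]
    pos_score = emb[idx] @ context
    neg_score = neg @ context
    return F.relu(margin - pos_score + neg_score)
```

In this reading, training would loop the loss over frames sampled from many videos, and the multi-resolution augmentation mentioned in the abstract would correspond to applying the same objective to frame sequences subsampled at several temporal strides.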

Cite

Text

Ramanathan et al. "Learning Temporal Embeddings for Complex Video Analysis." International Conference on Computer Vision, 2015. doi:10.1109/ICCV.2015.508

Markdown

[Ramanathan et al. "Learning Temporal Embeddings for Complex Video Analysis." International Conference on Computer Vision, 2015.](https://mlanthology.org/iccv/2015/ramanathan2015iccv-learning/) doi:10.1109/ICCV.2015.508

BibTeX

@inproceedings{ramanathan2015iccv-learning,
  title     = {{Learning Temporal Embeddings for Complex Video Analysis}},
  author    = {Ramanathan, Vignesh and Tang, Kevin and Mori, Greg and Fei-Fei, Li},
  booktitle = {International Conference on Computer Vision},
  year      = {2015},
  doi       = {10.1109/ICCV.2015.508},
  url       = {https://mlanthology.org/iccv/2015/ramanathan2015iccv-learning/}
}