V4D: 4D Convolutional Neural Networks for Video-Level Representation Learning
Abstract
Most existing 3D CNN structures for video representation learning are clip-based methods, and do not consider the video-level temporal evolution of spatio-temporal features. In this paper, we propose Video-level 4D Convolutional Neural Networks, namely V4D, to model the evolution of long-range spatio-temporal representations with 4D convolutions, while preserving 3D spatio-temporal representations through residual connections. We further introduce training and inference methods for the proposed V4D. Extensive experiments are conducted on three video recognition benchmarks, where V4D achieves excellent results, surpassing recent 3D CNNs by a large margin.
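To make the central operation concrete, below is a minimal NumPy sketch of a 4D convolution over a video-level tensor. The axis layout `(C, U, T, H, W)` (channels, clip index, time, height, width) and the function name `conv4d` are illustrative assumptions, not the paper's implementation; the sketch uses naive loops and valid-mode cross-correlation purely to show how the fourth (clip) dimension enters the convolution alongside the usual 3D spatio-temporal axes.

```python
import numpy as np

def conv4d(x, w):
    """Naive valid-mode 4D cross-correlation (illustrative sketch).

    x: input video-level tensor of shape (C_in, U, T, H, W),
       where U indexes short-term clips and T is time within a clip.
    w: kernel of shape (C_out, C_in, kU, kT, kH, kW).
    Returns an array of shape
       (C_out, U - kU + 1, T - kT + 1, H - kH + 1, W - kW + 1).
    """
    c_out, c_in, ku, kt, kh, kw = w.shape
    _, U, T, H, W = x.shape
    out = np.zeros((c_out, U - ku + 1, T - kt + 1, H - kh + 1, W - kw + 1))
    for o in range(c_out):
        for u in range(out.shape[1]):
            for t in range(out.shape[2]):
                for h in range(out.shape[3]):
                    for v in range(out.shape[4]):
                        # Slide the 4D kernel jointly over clip, time,
                        # and spatial axes, summing over input channels.
                        patch = x[:, u:u+ku, t:t+kt, h:h+kh, v:v+kw]
                        out[o, u, t, h, v] = np.sum(patch * w[o])
    return out
```

With same-size padding and matching channel counts, the abstract's residual connection would amount to `y = x + conv4d_padded(x)`, letting the block fall back to the underlying 3D representation when the 4D kernel contributes little.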
Cite
Text
Zhang et al. "V4D: 4D Convolutional Neural Networks for Video-Level Representation Learning." International Conference on Learning Representations, 2020.

Markdown

[Zhang et al. "V4D: 4D Convolutional Neural Networks for Video-Level Representation Learning." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/zhang2020iclr-v4d/)

BibTeX
@inproceedings{zhang2020iclr-v4d,
title = {{V4D: 4D Convolutional Neural Networks for Video-Level Representation Learning}},
author = {Zhang, Shiwen and Guo, Sheng and Huang, Weilin and Scott, Matthew R. and Wang, Limin},
booktitle = {International Conference on Learning Representations},
year = {2020},
url = {https://mlanthology.org/iclr/2020/zhang2020iclr-v4d/}
}