The Best of Both Worlds: Combining Data-Independent and Data-Driven Approaches for Action Recognition
Abstract
Motivated by the success of CNNs in object recognition on images, researchers are striving to develop CNN equivalents for learning video features. However, learning video features globally has proven to be quite a challenge due to the difficulty of obtaining enough labels, processing large-scale video data, and representing motion information. Therefore, we propose to leverage effective techniques from both data-driven and data-independent approaches to improve action recognition systems. Our contribution is three-fold. First, we explicitly show that local handcrafted features and CNNs share the same convolution-pooling network structure. Second, we propose to use independent subspace analysis (ISA) to learn descriptors for state-of-the-art handcrafted features. Third, we enhance ISA with two new improvements, which make our learned descriptors significantly outperform the handcrafted ones. Experimental results on standard action recognition benchmarks show competitive performance.
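For readers unfamiliar with ISA, below is a minimal NumPy sketch of the square-and-pool activation that independent subspace analysis computes over local patches. It assumes whitened inputs and filters learned under an orthonormality constraint; the function name, shapes, and subspace size are illustrative, not taken from the authors' implementation.

import numpy as np

def isa_features(X, W, subspace_size=2):
    # X: (n_patches, dim) whitened input patches
    # W: (n_units, dim) ISA filters, learned with W @ W.T = I
    Z = X @ W.T                                   # linear filter responses
    n_units = W.shape[0]
    n_groups = n_units // subspace_size
    # Square, then pool within each subspace: the resulting feature is
    # invariant to phase changes among filters in the same group.
    pooled = (Z ** 2).reshape(len(X), n_groups, subspace_size).sum(axis=2)
    return np.sqrt(pooled)                        # one feature per subspace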
Cite
Text
Lan et al. "The Best of Both Worlds: Combining Data-Independent and Data-Driven Approaches for Action Recognition." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2016. doi:10.1109/CVPRW.2016.152
Markdown
[Lan et al. "The Best of Both Worlds: Combining Data-Independent and Data-Driven Approaches for Action Recognition." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2016.](https://mlanthology.org/cvprw/2016/lan2016cvprw-best/) doi:10.1109/CVPRW.2016.152
BibTeX
@inproceedings{lan2016cvprw-best,
title = {{The Best of Both Worlds: Combining Data-Independent and Data-Driven Approaches for Action Recognition}},
author = {Lan, Zhen-Zhong and Yu, Shoou-I and Yao, Dezhong and Lin, Ming and Raj, Bhiksha and Hauptmann, Alexander G.},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2016},
pages = {1196--1205},
doi = {10.1109/CVPRW.2016.152},
url = {https://mlanthology.org/cvprw/2016/lan2016cvprw-best/}
}