Temporal Action Detection with Structured Segment Networks
Abstract
Detecting actions in untrimmed videos is an important yet challenging task. In this paper, we present the structured segment network (SSN), a novel framework which models the temporal structure of each action instance via a structured temporal pyramid. On top of the pyramid, we further introduce a decomposed discriminative model comprising two classifiers, respectively for classifying actions and determining completeness. This allows the framework to effectively distinguish positive proposals from background or incomplete ones, thus leading to both accurate recognition and localization. These components are integrated into a unified network that can be efficiently trained in an end-to-end fashion. Additionally, a simple yet effective temporal action proposal scheme, dubbed temporal actionness grouping (TAG), is devised to generate high-quality action proposals. On two challenging benchmarks, THUMOS'14 and ActivityNet, our method remarkably outperforms previous state-of-the-art methods, demonstrating superior accuracy and strong adaptivity in handling actions with various temporal structures.
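To make the two central ideas of the abstract concrete, the sketch below illustrates (i) structured temporal pyramid pooling over a proposal augmented with starting and ending stages and (ii) the decomposed pair of classifiers, one scoring action categories and one scoring class-wise completeness. This is a minimal, illustrative sketch in Python/NumPy; the function names, pyramid levels, stage-extension ratio, and random linear classifiers are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

def temporal_pyramid_pool(feats, levels=(1, 2)):
    """Average-pool a (T, D) snippet-feature sequence into a fixed-length
    vector with a temporal pyramid: level k splits the span into k parts."""
    pooled = []
    T = feats.shape[0]
    for k in levels:
        bounds = np.linspace(0, T, k + 1).astype(int)
        for i in range(k):
            part = feats[bounds[i]:max(bounds[i] + 1, bounds[i + 1])]
            pooled.append(part.mean(axis=0))
    return np.concatenate(pooled)  # shape: (sum(levels) * D,)

def structured_proposal_feature(snippet_feats, start, end):
    """Build a structured feature for one proposal [start, end) by pooling an
    augmented span split into starting / course / ending stages -- a rough
    sketch of structured temporal pyramid pooling, not the exact recipe."""
    T = snippet_feats.shape[0]
    length = end - start
    s_aug = max(0, start - length // 2)   # assumed extension before the span
    e_aug = min(T, end + length // 2)     # assumed extension after the span
    starting = snippet_feats[s_aug:start] if start > s_aug else snippet_feats[start:start + 1]
    course   = snippet_feats[start:end]
    ending   = snippet_feats[end:e_aug] if e_aug > end else snippet_feats[end - 1:end]
    return np.concatenate([
        starting.mean(axis=0),                         # starting stage, level-1 pool
        temporal_pyramid_pool(course, levels=(1, 2)),  # course stage, two-level pyramid
        ending.mean(axis=0),                           # ending stage, level-1 pool
    ])

# Decomposed discriminative model: an action classifier (with a background
# class) and a class-wise completeness classifier, shown here as random
# linear layers purely for illustration.
rng = np.random.default_rng(0)
D, num_classes = 64, 20
snippets = rng.normal(size=(100, D))                 # e.g. 100 snippet features
phi = structured_proposal_feature(snippets, start=30, end=60)
W_act  = rng.normal(size=(num_classes + 1, phi.size))  # +1 for background
W_comp = rng.normal(size=(num_classes, phi.size))      # completeness per class
action_scores       = W_act  @ phi
completeness_scores = W_comp @ phi
```

In the paper's framing, a proposal is kept as a detection only when both scores are high: the action classifier rejects background, while the completeness classifier rejects fragments that cover only part of an action instance.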
Cite
Text
Zhao et al. "Temporal Action Detection with Structured Segment Networks." International Conference on Computer Vision, 2017. doi:10.1109/ICCV.2017.317
Markdown
[Zhao et al. "Temporal Action Detection with Structured Segment Networks." International Conference on Computer Vision, 2017.](https://mlanthology.org/iccv/2017/zhao2017iccv-temporal/) doi:10.1109/ICCV.2017.317
BibTeX
@inproceedings{zhao2017iccv-temporal,
title = {{Temporal Action Detection with Structured Segment Networks}},
author = {Zhao, Yue and Xiong, Yuanjun and Wang, Limin and Wu, Zhirong and Tang, Xiaoou and Lin, Dahua},
booktitle = {International Conference on Computer Vision},
year = {2017},
doi = {10.1109/ICCV.2017.317},
url = {https://mlanthology.org/iccv/2017/zhao2017iccv-temporal/}
}