Video Imprint Segmentation for Temporal Action Detection in Untrimmed Videos
Abstract
We propose a temporal action detection by spatial segmentation framework, which simultaneously categorizes actions and temporally localizes action instances in untrimmed videos. The core idea is the conversion of the temporal detection task into a spatial semantic segmentation task. First, the video imprint representation is employed to capture the spatial/temporal interdependencies within/among frames and represent them as spatial proximity in a feature space. Subsequently, the obtained imprint representation is spatially segmented by a fully convolutional network. With such segmentation labels projected back to the video space, both temporal action boundary localization and per-frame spatial annotation are obtained simultaneously. The proposed framework is robust to the variable lengths of untrimmed videos, owing to the underlying fixed-size imprint representation. The efficacy of the framework is validated on two public action detection datasets.
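The pipeline in the abstract can be illustrated with a minimal toy sketch: a variable-length sequence of per-frame features is pooled into a fixed-size 2D grid (standing in for the imprint representation), the grid is given per-cell segmentation labels (which the paper obtains from a fully convolutional network), and those labels are projected back onto the original frames. The grid size, the average-pooling construction, and the bin-based projection below are illustrative assumptions, not the paper's actual imprint construction.

```python
import numpy as np

def frames_to_imprint(frame_feats, grid=8):
    """Toy stand-in for the imprint: average-pool T per-frame feature
    vectors into grid*grid temporal bins, arranged row-major on a
    fixed grid x grid map (an assumption, not the paper's method)."""
    t, d = frame_feats.shape
    bins = np.array_split(np.arange(t), grid * grid)  # requires t >= grid*grid
    cells = np.stack([frame_feats[b].mean(axis=0) for b in bins])
    return cells.reshape(grid, grid, d)

def project_back(seg_labels, t):
    """Project per-cell segmentation labels back to per-frame labels:
    each frame inherits the label of the imprint cell its bin maps to."""
    flat = seg_labels.reshape(-1)
    bins = np.array_split(np.arange(t), flat.size)
    labels = np.empty(t, dtype=int)
    for cell, b in enumerate(bins):
        labels[b] = flat[cell]
    return labels

# Usage: a 100-frame video with 16-dim features, a hypothetical
# segmentation mask splitting the grid into background (0) and action (1).
feats = np.random.rand(100, 16)
imprint = frames_to_imprint(feats, grid=8)      # fixed-size regardless of T
seg = np.zeros((8, 8), dtype=int)
seg[4:] = 1                                     # pretend FCN output
frame_labels = project_back(seg, t=100)         # temporal localization
```

Because the imprint has a fixed size, the downstream segmentation network sees the same input shape for any video length, which is the property the abstract attributes to the framework's robustness.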
Cite
Text
Gao et al. "Video Imprint Segmentation for Temporal Action Detection in Untrimmed Videos." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.33018328
Markdown
[Gao et al. "Video Imprint Segmentation for Temporal Action Detection in Untrimmed Videos." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/gao2019aaai-video/) doi:10.1609/AAAI.V33I01.33018328
BibTeX
@inproceedings{gao2019aaai-video,
title = {{Video Imprint Segmentation for Temporal Action Detection in Untrimmed Videos}},
author = {Gao, Zhanning and Wang, Le and Zhang, Qilin and Niu, Zhenxing and Zheng, Nanning and Hua, Gang},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2019},
pages = {8328-8335},
doi = {10.1609/AAAI.V33I01.33018328},
url = {https://mlanthology.org/aaai/2019/gao2019aaai-video/}
}