Depth2Action: Exploring Embedded Depth for Large-Scale Action Recognition

Abstract

This paper performs the first investigation into depth for large-scale human action recognition in video where the depth cues are estimated from the videos themselves. We develop a new framework called depth2action and experiment thoroughly to determine how best to incorporate the depth information. We introduce spatio-temporal depth normalization (STDN) to enforce temporal consistency in our estimated depth sequences. We also propose modified depth motion maps (MDMM) to capture the subtle temporal changes in depth. These two components significantly improve the action recognition performance. We evaluate our depth2action framework on three large-scale action recognition video benchmarks. Our model achieves state-of-the-art performance when combined with appearance and motion information, thus demonstrating that depth2action is indeed complementary to existing approaches.

Cite

Text

Zhu and Newsam. "Depth2Action: Exploring Embedded Depth for Large-Scale Action Recognition." European Conference on Computer Vision, 2016. doi:10.1007/978-3-319-46604-0_47

Markdown

[Zhu and Newsam. "Depth2Action: Exploring Embedded Depth for Large-Scale Action Recognition." European Conference on Computer Vision, 2016.](https://mlanthology.org/eccv/2016/zhu2016eccv-depth/) doi:10.1007/978-3-319-46604-0_47

BibTeX

@inproceedings{zhu2016eccv-depth,
  title     = {{Depth2Action: Exploring Embedded Depth for Large-Scale Action Recognition}},
  author    = {Zhu, Yi and Newsam, Shawn D.},
  booktitle = {European Conference on Computer Vision},
  year      = {2016},
  pages     = {668--684},
  doi       = {10.1007/978-3-319-46604-0_47},
  url       = {https://mlanthology.org/eccv/2016/zhu2016eccv-depth/}
}