DAPs: Deep Action Proposals for Action Understanding
Abstract
Object proposals have contributed significantly to recent advances in object understanding in images. Inspired by the success of this approach, we introduce Deep Action Proposals (DAPs), an effective and efficient algorithm for generating temporal action proposals from long videos. We show how to take advantage of the vast capacity of deep learning models and memory cells to retrieve, from untrimmed videos, temporal segments that are likely to contain actions. A comprehensive evaluation indicates that our approach outperforms previous work on a large-scale action benchmark, runs at 134 FPS, making it practical for large-scale scenarios, and exhibits an appealing ability to generalize, i.e. to retrieve good-quality temporal proposals of actions unseen in training.
Cite
Text
Escorcia et al. "DAPs: Deep Action Proposals for Action Understanding." European Conference on Computer Vision, 2016. doi:10.1007/978-3-319-46487-9_47
Markdown
[Escorcia et al. "DAPs: Deep Action Proposals for Action Understanding." European Conference on Computer Vision, 2016.](https://mlanthology.org/eccv/2016/escorcia2016eccv-daps/) doi:10.1007/978-3-319-46487-9_47
BibTeX
@inproceedings{escorcia2016eccv-daps,
title = {{DAPs: Deep Action Proposals for Action Understanding}},
author = {Escorcia, Victor and Heilbron, Fabian Caba and Niebles, Juan Carlos and Ghanem, Bernard},
booktitle = {European Conference on Computer Vision},
year = {2016},
  pages = {768--784},
doi = {10.1007/978-3-319-46487-9_47},
url = {https://mlanthology.org/eccv/2016/escorcia2016eccv-daps/}
}