Action Exemplar Based Real-Time Action Detection
Abstract
We propose a real-time action detection system based on a novel action representation and an effective learning method that works with a small training set. We represent actions with a new feature that measures the "global" distance from a set of action exemplars, where action exemplars are constructed from a vocabulary that encodes "local" instantaneous body motions. A cascade of linear SVMs is used to learn target actions, where at each layer a selective set of exemplars is chosen and variations between locally similar actions are trained in a coarse-to-fine manner. The method is further extended to incrementally learn a new action from a single example. The method is implemented as a real-time system that detects actions at frame rate. Its performance is extensively validated on public and in-house action datasets.
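The two ingredients of the abstract — a "global" feature of distances to action exemplars, and a cascade of linear classifiers with early rejection — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the distance metric (Euclidean), the stage weights, and all variable names here are hypothetical.

```python
import numpy as np

def exemplar_feature(x, exemplars):
    """Global representation of a motion descriptor x: its distance to each
    action exemplar. (Euclidean distance is an assumption; the paper's
    actual metric may differ.)"""
    return np.linalg.norm(exemplars - x, axis=1)

def cascade_detect(feature, stages):
    """Run the feature through a cascade of linear classifiers.
    Each stage is (w, b, threshold); the candidate is rejected as soon as
    one stage's score falls below its threshold, which keeps the per-frame
    cost low for real-time detection."""
    for w, b, thresh in stages:
        if feature @ w + b < thresh:
            return False  # early rejection
    return True

# Toy usage with made-up exemplars and a single hand-set cascade stage.
exemplars = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
x = np.array([1.0, 1.0])            # descriptor coinciding with exemplar 1
f = exemplar_feature(x, exemplars)  # distance-to-exemplar feature vector
stages = [(np.ones(3), 0.0, 1.0)]   # (weights, bias, rejection threshold)
detected = cascade_detect(f, stages)
```

In the paper the stage weights would come from trained linear SVMs, with later stages separating increasingly similar actions; here a single fixed stage stands in for that cascade.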
Cite
Text
Jung et al. "Action Exemplar Based Real-Time Action Detection." IEEE/CVF International Conference on Computer Vision Workshops, 2009. doi:10.1109/ICCVW.2009.5457661
Markdown
[Jung et al. "Action Exemplar Based Real-Time Action Detection." IEEE/CVF International Conference on Computer Vision Workshops, 2009.](https://mlanthology.org/iccvw/2009/jung2009iccvw-action/) doi:10.1109/ICCVW.2009.5457661
BibTeX
@inproceedings{jung2009iccvw-action,
title = {{Action Exemplar Based Real-Time Action Detection}},
author = {Jung, Sang-Hack and Guo, Yanlin and Sawhney, Harpreet S. and Kumar, Rakesh},
booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
year = {2009},
pages = {498-505},
doi = {10.1109/ICCVW.2009.5457661},
url = {https://mlanthology.org/iccvw/2009/jung2009iccvw-action/}
}