A Hough Transform-Based Voting Framework for Action Recognition
Abstract
We present a method to classify and localize human actions in video using a Hough transform voting framework. Random trees are trained to learn a mapping between densely-sampled feature patches and their corresponding votes in a spatio-temporal-action Hough space. The leaves of the trees form a discriminative multi-class codebook that shares features between the action classes and votes for action centers in a probabilistic manner. Using low-level features such as gradients and optical flow, we demonstrate that Hough-voting can achieve state-of-the-art performance on several datasets covering a wide range of action-recognition scenarios. ©2010 IEEE.
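The core idea described above can be illustrated with a toy sketch: feature patches cast weighted votes for hypothesized action centers in a Hough accumulator, and the peak of the accumulator is taken as the detection. This is only an illustrative simplification (a 2D accumulator over space and time, with a hypothetical `hough_vote` helper), not the authors' implementation:

```python
# Illustrative sketch of Hough-space vote accumulation; not the paper's code.
import numpy as np

def hough_vote(patch_votes, accumulator_shape):
    """Accumulate probabilistic patch votes into a (simplified, 2D)
    spatio-temporal Hough space with axes (x, t)."""
    acc = np.zeros(accumulator_shape)
    for (x, t), weight in patch_votes:
        # Each patch contributes its vote weight at the hypothesized center.
        if 0 <= x < accumulator_shape[0] and 0 <= t < accumulator_shape[1]:
            acc[x, t] += weight
    return acc

# Hypothetical votes: two patches agree on center (5, 3); one is an outlier.
votes = [((5, 3), 0.9), ((5, 3), 0.7), ((2, 8), 0.2)]
acc = hough_vote(votes, (10, 10))
center = np.unravel_index(np.argmax(acc), acc.shape)  # -> (5, 3)
```

In the actual framework, the vote weights come from the leaves of the trained random trees, and the accumulator additionally spans scale and action class.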
Cite
Text
Yao et al. "A Hough Transform-Based Voting Framework for Action Recognition." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2010. doi:10.1109/CVPR.2010.5539883
Markdown
[Yao et al. "A Hough Transform-Based Voting Framework for Action Recognition." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2010.](https://mlanthology.org/cvpr/2010/yao2010cvpr-hough/) doi:10.1109/CVPR.2010.5539883
BibTeX
@inproceedings{yao2010cvpr-hough,
title = {{A Hough Transform-Based Voting Framework for Action Recognition}},
author = {Yao, Angela and Gall, Jürgen and Van Gool, Luc},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2010},
pages = {2061--2068},
doi = {10.1109/CVPR.2010.5539883},
url = {https://mlanthology.org/cvpr/2010/yao2010cvpr-hough/}
}