Unsupervised Random Forest Indexing for Fast Action Search
Abstract
Despite recent successes in searching for small objects in images, it remains a challenging problem to search for and locate actions in crowded videos because of (1) the large variation of human actions and (2) the intensive computational cost of searching the video space. To address these challenges, we propose a fast action search and localization method that supports relevance feedback from the user. By characterizing videos as spatio-temporal interest points and building a random forest to index and match these points, our query matching is robust and efficient. To enable efficient action localization, we propose a coarse-to-fine sub-volume search scheme, which is several orders of magnitude faster than existing video branch-and-bound search. Challenging cross-dataset searches of several actions validate the effectiveness and efficiency of our method.
Cite
Text
Yu et al. "Unsupervised Random Forest Indexing for Fast Action Search." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2011. doi:10.1109/CVPR.2011.5995488
Markdown
[Yu et al. "Unsupervised Random Forest Indexing for Fast Action Search." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2011.](https://mlanthology.org/cvpr/2011/yu2011cvpr-unsupervised/) doi:10.1109/CVPR.2011.5995488
BibTeX
@inproceedings{yu2011cvpr-unsupervised,
title = {{Unsupervised Random Forest Indexing for Fast Action Search}},
author = {Yu, Gang and Yuan, Junsong and Liu, Zicheng},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2011},
pages = {865--872},
doi = {10.1109/CVPR.2011.5995488},
url = {https://mlanthology.org/cvpr/2011/yu2011cvpr-unsupervised/}
}