Multi-View Action Recognition One Camera at a Time
Abstract
For human action recognition methods, there is often a trade-off between classification accuracy and computational efficiency. Methods that incorporate 3D information from multiple cameras are often computationally expensive and unsuitable for real-time applications. 2D, frame-based methods are generally more efficient, but suffer from lower recognition accuracy. In this paper, we present a hybrid keypose-based method that operates in a multi-camera environment, but uses only a single camera at a time. For each keypose, we learn the relative utility of the current viewpoint compared with switching to a different available camera in the network for future classification. On a benchmark multi-camera action recognition dataset, our method outperforms approaches that incorporate all available cameras.
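The core idea, selecting a single camera per keypose based on a learned utility, can be illustrated with a minimal sketch. This is not the authors' algorithm; the function name, the utility table, and the keypose labels are all invented for illustration, assuming utilities are learned offline as scores per (keypose, camera) pair:

```python
def choose_camera(current_cam, keypose, utility):
    """Pick the camera with the highest learned utility for this keypose.

    utility: dict mapping (keypose, camera_id) -> score (hypothetical,
    assumed to be learned offline from training data).
    Returns the camera to use next, which may be the current camera.
    """
    candidates = {cam: score for (kp, cam), score in utility.items()
                  if kp == keypose}
    if not candidates:
        return current_cam  # no information for this keypose: keep the view
    return max(candidates, key=candidates.get)


# Toy utility table: "arm_raised" is best seen from camera 2,
# while "crouch" is best classified from the current camera 0.
utility = {
    ("arm_raised", 0): 0.3,
    ("arm_raised", 2): 0.9,
    ("crouch", 0): 0.8,
    ("crouch", 2): 0.4,
}

print(choose_camera(0, "arm_raised", utility))   # switches to camera 2
print(choose_camera(0, "crouch", utility))       # stays on camera 0
print(choose_camera(0, "unseen_pose", utility))  # no data: stays on camera 0
```

The point of the sketch is that only one camera's frames are processed at any time; the multi-camera network is consulted only through the learned switching decision.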
Cite
Text
Spurlock and Souvenir. "Multi-View Action Recognition One Camera at a Time." IEEE/CVF Winter Conference on Applications of Computer Vision, 2014. doi:10.1109/WACV.2014.6836047
Markdown
[Spurlock and Souvenir. "Multi-View Action Recognition One Camera at a Time." IEEE/CVF Winter Conference on Applications of Computer Vision, 2014.](https://mlanthology.org/wacv/2014/spurlock2014wacv-multi/) doi:10.1109/WACV.2014.6836047
BibTeX
@inproceedings{spurlock2014wacv-multi,
title = {{Multi-View Action Recognition One Camera at a Time}},
author = {Spurlock, Scott and Souvenir, Richard},
booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
year = {2014},
pages = {604-609},
doi = {10.1109/WACV.2014.6836047},
url = {https://mlanthology.org/wacv/2014/spurlock2014wacv-multi/}
}