Activity Recognition Using Dynamic Subspace Angles
Abstract
Cameras are now ubiquitous and hold the promise of significantly changing the way we live and interact with our environment. Human activity recognition is central to understanding dynamic scenes for applications ranging from security surveillance, to assisted living for the elderly, to video gaming without controllers. Most current approaches to this problem rely on local spatio-temporal features, which limits their ability to recognize long and complex actions. In this paper, we propose a new approach that exploits the temporal information encoded in the data. The main idea is to model activities as the outputs of unknown dynamic systems evolving from unknown initial conditions. Under this framework, we show that activity videos can be compared by computing the principal angles between the subspaces representing the activities, which are found by a simple SVD of the experimental data. The proposed approach outperforms state-of-the-art methods when classifying activities in the KTH dataset, as well as in much more complex scenarios involving interacting actors.
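The core comparison described in the abstract, measuring principal angles between data subspaces obtained from an SVD, can be sketched in a few lines of NumPy. This is only an illustrative sketch of the generic subspace-angle computation, not the paper's full pipeline (the Hankel-matrix construction, feature extraction, and classifier are omitted); the function names and the `rank` parameter are assumptions for the example.

```python
import numpy as np

def subspace_basis(X, rank):
    # Orthonormal basis for the dominant rank-dimensional subspace of the
    # data matrix X, taken from its left singular vectors.
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :rank]

def principal_angles(A, B):
    # Principal angles between the subspaces spanned by the orthonormal
    # columns of A and B: the arccosines of the singular values of A^T B.
    s = np.linalg.svd(A.T @ B, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

def subspace_distance(X1, X2, rank=3):
    # A simple dissimilarity between two activity clips: here, the largest
    # principal angle between their dominant data subspaces.
    A = subspace_basis(X1, rank)
    B = subspace_basis(X2, rank)
    return principal_angles(A, B).max()
```

Identical subspaces yield angles of zero, while orthogonal subspaces yield angles of π/2, so the maximum principal angle behaves as a bounded distance suitable for nearest-neighbor classification of activity clips.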
Cite
Text
Li et al. "Activity Recognition Using Dynamic Subspace Angles." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2011. doi:10.1109/CVPR.2011.5995672
Markdown
[Li et al. "Activity Recognition Using Dynamic Subspace Angles." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2011.](https://mlanthology.org/cvpr/2011/li2011cvpr-activity/) doi:10.1109/CVPR.2011.5995672
BibTeX
@inproceedings{li2011cvpr-activity,
title = {{Activity Recognition Using Dynamic Subspace Angles}},
author = {Li, Binlong and Ayazoglu, Mustafa and Mao, Teresa and Camps, Octavia I. and Sznaier, Mario},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2011},
pages = {3193-3200},
doi = {10.1109/CVPR.2011.5995672},
url = {https://mlanthology.org/cvpr/2011/li2011cvpr-activity/}
}