Spatio-Temporal Alignment and Hyperspherical Radon Transform for 3D Gait Recognition in Multi-View Environments
Abstract
This paper presents a view-invariant approach to gait recognition in multi-camera scenarios that exploits a joint spatio-temporal data representation and analysis. First, multi-view information is used to generate a 3D voxel reconstruction of the scene under study. The analyzed subject is tracked, and its centroid and orientation allow the associated volume to be recentered and aligned, yielding a representation invariant to translation, rotation, and scaling. The temporal periodicity of the walking cycle is extracted to align the input data in the time domain. Finally, the Hyperspherical Radon Transform is presented as an efficient tool for extracting features from spatio-temporal gait templates for classification. Experimental results demonstrate the validity and robustness of the proposed method for gait recognition under several covariates.
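The spatial-alignment step in the abstract (recenter the subject's voxel volume at its centroid, rotate it to a canonical orientation, and normalize scale) can be sketched as follows. This is a minimal illustration under assumed conventions (PCA on the ground-plane projection for orientation, height for scale), not the authors' implementation:

```python
import numpy as np

def align_voxels(points: np.ndarray) -> np.ndarray:
    """Align occupied-voxel coordinates for translation, rotation, and scale
    invariance. `points` is an (N, 3) array of (x, y, z) voxel centers.
    Hypothetical sketch of the alignment step, not the paper's code."""
    # Translation invariance: recenter at the centroid.
    centered = points - points.mean(axis=0)
    # Rotation invariance: estimate the subject's orientation as the
    # principal axis of the ground-plane (x, y) projection, then rotate
    # about the vertical axis so that axis maps to x.
    cov = np.cov(centered[:, :2], rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]
    theta = np.arctan2(axis[1], axis[0])
    c, s = np.cos(-theta), np.sin(-theta)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    aligned = centered @ rot.T
    # Scale invariance: normalize by the subject's height (z extent).
    height = aligned[:, 2].max() - aligned[:, 2].min()
    return aligned / height if height > 0 else aligned
```

After this normalization, volumes from different viewpoints and distances become directly comparable before temporal alignment and feature extraction.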
Cite
Text
Canton-Ferrer et al. "Spatio-Temporal Alignment and Hyperspherical Radon Transform for 3D Gait Recognition in Multi-View Environments." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2010. doi:10.1109/CVPRW.2010.5544615
Markdown
[Canton-Ferrer et al. "Spatio-Temporal Alignment and Hyperspherical Radon Transform for 3D Gait Recognition in Multi-View Environments." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2010.](https://mlanthology.org/cvprw/2010/cantonferrer2010cvprw-spatiotemporal/) doi:10.1109/CVPRW.2010.5544615
BibTeX
@inproceedings{cantonferrer2010cvprw-spatiotemporal,
title = {{Spatio-Temporal Alignment and Hyperspherical Radon Transform for 3D Gait Recognition in Multi-View Environments}},
author = {Canton-Ferrer, Cristian and Casas, Josep R. and Pardàs, Montse},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2010},
pages = {116-121},
doi = {10.1109/CVPRW.2010.5544615},
url = {https://mlanthology.org/cvprw/2010/cantonferrer2010cvprw-spatiotemporal/}
}