Automatic Discovery of Action Taxonomies from Multiple Views
Abstract
We present a new method for segmenting actions into primitives and classifying them into a hierarchy of action classes. Our scheme learns action classes in an unsupervised manner from examples recorded by multiple cameras. Segmentation and clustering of action classes are based on a recently proposed motion descriptor that can be extracted efficiently from reconstructed volume sequences. Because our representation is independent of viewpoint, it yields segmentation and classification methods that are surprisingly efficient and robust. Our new method can serve as the first step in a semi-supervised action recognition system that automatically breaks down training examples of people performing sequences of actions into primitive actions, which can then be discriminatively classified and assembled into high-level recognizers.
Cite
Text
Weinland et al. "Automatic Discovery of Action Taxonomies from Multiple Views." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2006. doi:10.1109/CVPR.2006.65
Markdown
[Weinland et al. "Automatic Discovery of Action Taxonomies from Multiple Views." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2006.](https://mlanthology.org/cvpr/2006/weinland2006cvpr-automatic/) doi:10.1109/CVPR.2006.65
BibTeX
@inproceedings{weinland2006cvpr-automatic,
title = {{Automatic Discovery of Action Taxonomies from Multiple Views}},
author = {Weinland, Daniel and Ronfard, Rémi and Boyer, Edmond},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2006},
  pages = {1639--1645},
doi = {10.1109/CVPR.2006.65},
url = {https://mlanthology.org/cvpr/2006/weinland2006cvpr-automatic/}
}