Social Behavior Recognition in Continuous Video
Abstract
We present a novel method for analyzing social behavior. Continuous videos are segmented into action 'bouts' by building a temporal context model that combines features from spatio-temporal energy and agent trajectories. The method is tested on an unprecedented dataset of videos of interacting pairs of mice, which was collected as part of a state-of-the-art neurophysiological study of behavior. The dataset comprises over 88 hours (8 million frames) of annotated videos. We find that our novel trajectory features, used in a discriminative framework, are more informative than widely used spatio-temporal features; furthermore, temporal context plays an important role for action recognition in continuous videos. Our approach may be seen as a baseline method on this dataset, reaching a mean recognition rate of 61.2% compared to the expert's agreement rate of about 70%.
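To make the two ingredients of the abstract concrete, here is a minimal sketch of (a) simple trajectory features computed from two tracked agents and (b) a sliding-window temporal context descriptor. The specific features (inter-agent distance, per-agent speed), the window half-width `w`, and the function names are illustrative assumptions, not the paper's actual feature set or model.

```python
import numpy as np

def trajectory_features(a, b):
    """Per-frame social features from two agent trajectories.

    a, b: (T, 2) arrays of (x, y) centroid positions per frame.
    Returns a (T-1, 3) array: inter-agent distance, agent-a speed,
    agent-b speed. (Illustrative features; the paper's differ.)
    """
    dist = np.linalg.norm(a - b, axis=1)              # distance between agents
    va = np.linalg.norm(np.diff(a, axis=0), axis=1)   # frame-to-frame speed of a
    vb = np.linalg.norm(np.diff(b, axis=0), axis=1)   # frame-to-frame speed of b
    return np.column_stack([dist[1:], va, vb])

def temporal_context(feats, w=2):
    """Stack features over a +/-w frame window so each frame's
    descriptor includes temporal context (edges padded by repetition)."""
    padded = np.pad(feats, ((w, w), (0, 0)), mode="edge")
    T = feats.shape[0]
    return np.hstack([padded[i:i + T] for i in range(2 * w + 1)])
```

Descriptors like these could then be fed to any discriminative per-frame classifier, with contiguous same-label predictions merged into action bouts.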
Cite
Text
Burgos-Artizzu et al. "Social Behavior Recognition in Continuous Video." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2012. doi:10.1109/CVPR.2012.6247817
Markdown
[Burgos-Artizzu et al. "Social Behavior Recognition in Continuous Video." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2012.](https://mlanthology.org/cvpr/2012/burgosartizzu2012cvpr-social/) doi:10.1109/CVPR.2012.6247817
BibTeX
@inproceedings{burgosartizzu2012cvpr-social,
title = {{Social Behavior Recognition in Continuous Video}},
author = {Burgos-Artizzu, Xavier P. and Dollár, Piotr and Lin, Dayu and Anderson, David J. and Perona, Pietro},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2012},
pages = {1322--1329},
doi = {10.1109/CVPR.2012.6247817},
url = {https://mlanthology.org/cvpr/2012/burgosartizzu2012cvpr-social/}
}