Learning Dynamic Event Descriptions in Image Sequences
Abstract
Automatic detection of dynamic events in video sequences has a variety of applications, including visual surveillance and monitoring, video highlight extraction, intelligent transportation systems, and video summarization. Learning an accurate description of the various events in real-world scenes is challenging owing to the limited user-labeled data as well as the large variations in the patterns of the events. These variations arise either from the nature of the events themselves, such as their spatio-temporal structure, or from missing or ambiguous interpretations of the data produced by computer vision methods. In this work, we introduce a novel method for representing and classifying events in video sequences using reversible context-free grammars. The grammars are learned using a semi-supervised learning method. More concretely, using the classification entropy as a heuristic cost function, the grammars are iteratively learned through a search procedure. Experimental results demonstrating the efficacy of the learning algorithm and the event detection method applied to traffic video sequences are presented.
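The abstract describes using classification entropy as a heuristic cost to guide an iterative search over candidate grammars. The sketch below illustrates that general idea only; the candidate names and per-sequence class probabilities are hypothetical placeholders, not the paper's actual grammars or data, and the paper's full search procedure is not reproduced here.

```python
import math

def classification_entropy(probs):
    """Shannon entropy (in bits) of a class-probability distribution.

    Lower entropy means more confident classification, so entropy can
    serve as a heuristic cost when comparing candidate models.
    """
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical candidates: each "grammar" assigns class probabilities
# to unlabeled sequences; a greedy search step keeps the candidate
# with the lowest total entropy over the data.
candidates = {
    "grammar_a": [[0.9, 0.1], [0.8, 0.2]],
    "grammar_b": [[0.55, 0.45], [0.6, 0.4]],
}
costs = {
    name: sum(classification_entropy(p) for p in per_sequence)
    for name, per_sequence in candidates.items()
}
best = min(costs, key=costs.get)
print(best)  # grammar_a: its predictions are more confident (lower entropy)
```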
Cite
Text
Veeraraghavan et al. "Learning Dynamic Event Descriptions in Image Sequences." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2007. doi:10.1109/CVPR.2007.383075
Markdown
[Veeraraghavan et al. "Learning Dynamic Event Descriptions in Image Sequences." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2007.](https://mlanthology.org/cvpr/2007/veeraraghavan2007cvpr-learning/) doi:10.1109/CVPR.2007.383075
BibTeX
@inproceedings{veeraraghavan2007cvpr-learning,
title = {{Learning Dynamic Event Descriptions in Image Sequences}},
author = {Veeraraghavan, Harini and Papanikolopoulos, Nikolaos and Schrater, Paul R.},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2007},
doi = {10.1109/CVPR.2007.383075},
url = {https://mlanthology.org/cvpr/2007/veeraraghavan2007cvpr-learning/}
}