Learning Temporal, Relational, Force-Dynamic Event Definitions from Video
Abstract
We present and evaluate a novel implemented approach for learning to recognize events in video. First, we introduce a sublanguage of event logic, called k-AMA, that is sufficiently expressive to represent visual events yet sufficiently restrictive to support learning. Second, we develop a specific-to-general learning algorithm for learning event definitions in k-AMA. Finally, we apply this algorithm to the task of learning event definitions from video and show that it yields definitions that are competitive with hand-coded ones.
Cite
Text
Fern et al. "Learning Temporal, Relational, Force-Dynamic Event Definitions from Video." AAAI Conference on Artificial Intelligence, 2002. doi:10.5555/777092.777120
Markdown
[Fern et al. "Learning Temporal, Relational, Force-Dynamic Event Definitions from Video." AAAI Conference on Artificial Intelligence, 2002.](https://mlanthology.org/aaai/2002/fern2002aaai-learning/) doi:10.5555/777092.777120
BibTeX
@inproceedings{fern2002aaai-learning,
title = {{Learning Temporal, Relational, Force-Dynamic Event Definitions from Video}},
author = {Fern, Alan and Siskind, Jeffrey Mark and Givan, Robert},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2002},
pages = {159--166},
doi = {10.5555/777092.777120},
url = {https://mlanthology.org/aaai/2002/fern2002aaai-learning/}
}