Specific-to-General Learning for Temporal Events with Application to Learning Event Definitions from Video
Abstract
We study the problem of supervised learning of event classes in a simple temporal event-description language. We give lower and upper bounds and algorithms for the subsumption and generalization problems for two expressively powerful subsets of this logic, and present a positive-examples-only specific-to-general learning method based on the resulting algorithms. We also present a polynomial-time computable "syntactic" subsumption test that implies semantic subsumption without being equivalent to it. A generalization algorithm based on syntactic subsumption can be used in place of semantic generalization to improve the asymptotic complexity of the resulting learning algorithm. A companion paper shows that our methods can be applied to duplicate the performance of human-coded concepts in the substantial application domain of video event recognition.
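The learning method described in the abstract is the standard specific-to-general scheme: start from a maximally specific hypothesis and repeatedly generalize it just enough to cover each new positive example. As a minimal illustrative sketch of that scheme only (the paper's actual representation is a temporal event-description logic with its own subsumption and generalization algorithms; here, as a simplifying assumption, hypotheses are conjunctions represented as feature sets, so least general generalization is set intersection and subsumption is set containment):

```python
# Illustrative specific-to-general learner (assumption: hypotheses are
# conjunctions encoded as frozensets of features; this is NOT the paper's
# temporal event-description language).

def lgg(h1, h2):
    """Least general generalization of two conjunctive hypotheses:
    the intersection keeps exactly the features both share."""
    return h1 & h2

def subsumes(hypothesis, example):
    """A conjunction covers an example iff every feature it asserts holds."""
    return hypothesis <= frozenset(example)

def learn(positive_examples):
    """Positive-examples-only learning: fold the LGG over the examples,
    starting maximally specific at the first example."""
    examples = iter(positive_examples)
    hypothesis = frozenset(next(examples))
    for ex in examples:
        hypothesis = lgg(hypothesis, frozenset(ex))
    return hypothesis
```

For instance, `learn([{"a", "b", "c"}, {"a", "b"}, {"a", "b", "d"}])` yields the hypothesis `{"a", "b"}`, the most specific conjunction covering all three examples. The paper's contribution lies in making the analogous `lgg` and `subsumes` operations tractable for temporal event descriptions, including a polynomial-time "syntactic" subsumption test that soundly approximates semantic subsumption.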
Cite
Text
Fern et al. "Specific-to-General Learning for Temporal Events with Application to Learning Event Definitions from Video." Journal of Artificial Intelligence Research, 2002. doi:10.1613/JAIR.1050
Markdown
[Fern et al. "Specific-to-General Learning for Temporal Events with Application to Learning Event Definitions from Video." Journal of Artificial Intelligence Research, 2002.](https://mlanthology.org/jair/2002/fern2002jair-specifictogeneral/) doi:10.1613/JAIR.1050
BibTeX
@article{fern2002jair-specifictogeneral,
title = {{Specific-to-General Learning for Temporal Events with Application to Learning Event Definitions from Video}},
author = {Fern, Alan and Givan, Robert and Siskind, Jeffrey Mark},
journal = {Journal of Artificial Intelligence Research},
year = {2002},
pages = {379--449},
doi = {10.1613/JAIR.1050},
volume = {17},
url = {https://mlanthology.org/jair/2002/fern2002jair-specifictogeneral/}
}