Learning Temporal Sequence Model from Partially Labeled Data

Abstract

Graphical models are often used to represent and recognize activities. Purely unsupervised methods (such as HMMs) can be trained automatically, but they yield models whose internal structure, the nodes, is difficult to interpret semantically. Manually constructed networks typically have nodes corresponding to sub-events, but programming and training these networks is tedious and requires extensive domain expertise. In this paper, we propose a semi-supervised approach in which a manually structured Propagation Network (a form of dynamic Bayesian network) is initialized from a small amount of fully annotated data and then refined in an unsupervised fashion by an EM-based learning method. During node refinement (the M step), a boosting-based algorithm is employed to train the evidence detectors of individual nodes. Experiments on several tasks over a variety of data types, including vision and inertial measurements, demonstrate the ability to learn from as little as one fully annotated example accompanied by a small number of positive but non-annotated training examples. The system is applied to both recognition and anomaly detection tasks.
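To make the semi-supervised scheme in the abstract concrete, here is a minimal, self-contained sketch of the general pattern it describes: initialize a detector from a small labeled set, then alternate an E step (pseudo-labeling unlabeled examples) with a boosting-based M step (retraining the detector). This is an illustrative toy, not the authors' implementation; all function names (`train_adaboost`, `em_refine`, etc.) and the decision-stump weak learner are assumptions for the sketch.

```python
import math

def stump_predict(stump, x):
    """A decision stump: returns +sign if x[feat] > thresh, else -sign."""
    feat, thresh, sign = stump
    return sign if x[feat] > thresh else -sign

def train_adaboost(X, y, rounds=5):
    """Tiny AdaBoost over decision stumps (stand-in for the paper's
    boosting-based evidence-detector training in the M step)."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        # Exhaustively pick the stump with the lowest weighted error.
        best = None
        for feat in range(len(X[0])):
            for thresh in sorted({x[feat] for x in X}):
                for sign in (1, -1):
                    err = sum(wi for wi, x, yi in zip(w, X, y)
                              if stump_predict((feat, thresh, sign), x) != yi)
                    if best is None or err < best[0]:
                        best = (err, (feat, thresh, sign))
        err, stump = best
        err = min(max(err, 1e-9), 1 - 1e-9)          # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, stump))
        # Reweight: boost the weight of misclassified examples.
        w = [wi * math.exp(-alpha * yi * stump_predict(stump, x))
             for wi, x, yi in zip(w, X, y)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted vote of the boosted stumps, in {-1, +1}."""
    score = sum(alpha * stump_predict(stump, x) for alpha, stump in ensemble)
    return 1 if score >= 0 else -1

def em_refine(X_lab, y_lab, X_unlab, iters=3):
    """Hard-EM refinement: E step pseudo-labels the unlabeled data,
    M step retrains the boosted detector on labeled + pseudo-labeled."""
    detector = train_adaboost(X_lab, y_lab)
    for _ in range(iters):
        pseudo = [predict(detector, x) for x in X_unlab]            # E step
        detector = train_adaboost(X_lab + X_unlab, y_lab + pseudo)  # M step
    return detector
```

In the paper the E step infers node states in the Propagation Network over whole sequences rather than pseudo-labeling independent points, but the alternation shown here is the same shape: a small annotated set anchors the semantics of each node, and the unlabeled examples refine its detector.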

Cite

Text

Shi et al. "Learning Temporal Sequence Model from Partially Labeled Data." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2006. doi:10.1109/CVPR.2006.174

Markdown

[Shi et al. "Learning Temporal Sequence Model from Partially Labeled Data." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2006.](https://mlanthology.org/cvpr/2006/shi2006cvpr-learning/) doi:10.1109/CVPR.2006.174

BibTeX

@inproceedings{shi2006cvpr-learning,
  title     = {{Learning Temporal Sequence Model from Partially Labeled Data}},
  author    = {Shi, Yifan and Bobick, Aaron F. and Essa, Irfan A.},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {2006},
  pages     = {1631--1638},
  doi       = {10.1109/CVPR.2006.174},
  url       = {https://mlanthology.org/cvpr/2006/shi2006cvpr-learning/}
}