Watch and Learn: Semi-Supervised Learning for Object Detectors from Video
Abstract
We present a semi-supervised approach that localizes multiple unknown object instances in long videos. We start with a handful of labeled boxes and iteratively learn and label hundreds of thousands of object instances. We propose criteria for reliable object detection and tracking for constraining the semi-supervised learning process and minimizing semantic drift. Our approach does not assume exhaustive labeling of each object instance in any single frame, or any explicit annotation of negative data. Working in such a generic setting allows us to tackle multiple object instances in video, many of which are static. In contrast, existing approaches either do not consider multiple object instances per video, or rely heavily on the motion of the objects present. The experiments demonstrate the effectiveness of our approach by evaluating the automatically labeled data on a variety of metrics such as quality, coverage (recall), diversity, and relevance to training an object detector.
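A minimal sketch of the iterative self-labeling loop the abstract describes: train a detector from a handful of seed boxes, add only the detections whose tracks pass reliability criteria, and re-train. This is an illustration of the general idea, not the authors' implementation; all callables passed in (train_fn, detect_fn, track_fn, reliable_fn) are hypothetical placeholders.

```python
def iterative_self_labeling(seed_boxes, frames, train_fn, detect_fn,
                            track_fn, reliable_fn, n_rounds=5):
    """Grow a labeled set from a few seed boxes by iterative self-training.

    All function arguments are assumed placeholders supplied by the caller;
    the reliability check stands in for the paper's detection/tracking criteria.
    """
    labeled = list(seed_boxes)                 # handful of labeled boxes
    detector = train_fn(labeled)
    for _ in range(n_rounds):
        new_boxes = []
        for frame in frames:
            for box in detect_fn(detector, frame):
                # Keep only detections whose tracks satisfy the reliability
                # criteria; this constrains the loop and limits semantic drift.
                if reliable_fn(box, track_fn(box, frames)):
                    new_boxes.append(box)
        labeled.extend(new_boxes)              # newly auto-labeled instances
        detector = train_fn(labeled)           # re-train on the grown set
    return detector, labeled
```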
Cite
Text
Misra et al. "Watch and Learn: Semi-Supervised Learning for Object Detectors from Video." Conference on Computer Vision and Pattern Recognition, 2015.
Markdown
[Misra et al. "Watch and Learn: Semi-Supervised Learning for Object Detectors from Video." Conference on Computer Vision and Pattern Recognition, 2015.](https://mlanthology.org/cvpr/2015/misra2015cvpr-watch/)
BibTeX
@inproceedings{misra2015cvpr-watch,
  title     = {{Watch and Learn: Semi-Supervised Learning for Object Detectors from Video}},
  author    = {Misra, Ishan and Shrivastava, Abhinav and Hebert, Martial},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2015},
  url       = {https://mlanthology.org/cvpr/2015/misra2015cvpr-watch/}
}