Learning to Detect and Retrieve Objects from Unlabeled Videos

Abstract

Learning an object detection or retrieval system requires a large data set with manual annotations. Such data sets are expensive and time-consuming to create and therefore difficult to obtain on a large scale. In this work, we propose to exploit the natural correlation between narrations and the visual presence of objects in video to learn an object detector and retrieval system without any manual labeling. We pose the problem as weakly supervised learning with noisy labels and propose a novel object detection paradigm under these constraints. We handle background rejection using contrastive samples and address the high level of label noise with a new clustering score. Our evaluation is based on a set of 11 manually annotated objects in over 5000 frames. We compare against a weakly-supervised baseline and provide a strongly labeled upper bound.
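The sketch below is a minimal illustration (not the authors' implementation) of the idea described in the abstract: regions from frames whose narration mentions an object are treated as noisy positives, regions from contrastive frames (where the object is not mentioned) as background, and a cluster score selects the cluster most likely to contain the object. The feature representation, the k-means clustering, and the purity-style score are illustrative assumptions; the paper's actual clustering score is not given in the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans


def select_object_cluster(pos_feats, neg_feats, n_clusters=10, seed=0):
    """Cluster region features and score each cluster by how strongly it is
    dominated by regions from narrated (noisy-positive) frames.

    pos_feats: region features from frames whose narration mentions the object.
    neg_feats: region features from contrastive frames (object not mentioned).
    """
    feats = np.vstack([pos_feats, neg_feats])
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(feats)
    pos_labels, neg_labels = labels[: len(pos_feats)], labels[len(pos_feats):]

    scores = []
    for c in range(n_clusters):
        n_pos = np.sum(pos_labels == c)
        n_neg = np.sum(neg_labels == c)
        # Purity-style score (an assumption): fraction of the cluster that
        # comes from narrated frames; background-heavy clusters score low.
        scores.append(n_pos / max(n_pos + n_neg, 1))

    best = int(np.argmax(scores))
    # Regions of the winning cluster can serve as pseudo-labels for training a detector.
    return best, np.where(pos_labels == best)[0]


# Toy usage with random vectors standing in for region descriptors.
rng = np.random.default_rng(0)
pos = rng.normal(loc=1.0, size=(200, 128))   # noisy positives from narrated frames
neg = rng.normal(loc=-1.0, size=(200, 128))  # background from contrastive frames
cluster_id, pseudo_positive_idx = select_object_cluster(pos, neg)
```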

Cite

Text

Amrani et al. "Learning to Detect and Retrieve Objects from Unlabeled Videos." IEEE/CVF International Conference on Computer Vision Workshops, 2019. doi:10.1109/ICCVW.2019.00567

Markdown

[Amrani et al. "Learning to Detect and Retrieve Objects from Unlabeled Videos." IEEE/CVF International Conference on Computer Vision Workshops, 2019.](https://mlanthology.org/iccvw/2019/amrani2019iccvw-learning/) doi:10.1109/ICCVW.2019.00567

BibTeX

@inproceedings{amrani2019iccvw-learning,
  title     = {{Learning to Detect and Retrieve Objects from Unlabeled Videos}},
  author    = {Amrani, Elad and Ben-Ari, Rami and Hakim, Tal and Bronstein, Alex M.},
  booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
  year      = {2019},
  pages     = {3713--3717},
  doi       = {10.1109/ICCVW.2019.00567},
  url       = {https://mlanthology.org/iccvw/2019/amrani2019iccvw-learning/}
}