Unsupervised Generation of a Viewpoint Annotated Car Dataset from Videos
Abstract
Object recognition approaches have recently been extended to yield, in addition to the object class, also the viewpoint or pose. Training such approaches typically requires additional viewpoint or keypoint annotations in the training data or, alternatively, synthetic CAD models. In this paper, we present an approach that creates a dataset of images annotated with bounding boxes and viewpoint labels in a fully automated manner from videos. We assume that the scene is static in order to reconstruct 3D surfaces via structure from motion. We automatically detect when the reconstruction fails and normalize for the viewpoint of the 3D models by aligning the reconstructed point clouds. Using cars as an example, we show that we can expand a large dataset of annotated single images and obtain improved performance when training a viewpoint regressor on this joint dataset.
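The viewpoint normalization step described above hinges on rigid alignment of reconstructed point clouds. The paper does not ship code, but the idea can be illustrated with a minimal ICP sketch using a Kabsch rotation solver; the helper names `kabsch` and `icp` and the NumPy/SciPy implementation below are illustrative assumptions, not the authors' actual alignment procedure.

```python
# Illustrative sketch (not the authors' code): align one SfM car
# reconstruction to a reference reconstruction with point-to-point ICP,
# so that all models share a common viewpoint frame.
import numpy as np
from scipy.spatial import cKDTree

def kabsch(P, Q):
    """Best-fit rotation R and translation t with Q ~ R @ P + t (both Nx3)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

def icp(source, target, iters=50, tol=1e-6):
    """Iteratively align `source` (Nx3) to `target` (Mx3); returns R, t."""
    tree = cKDTree(target)
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(src)          # nearest-neighbor correspondences
        R, t = kabsch(src, target[idx])
        src = src @ R.T + t                  # apply incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:        # stop when residual stabilizes
            break
        prev_err = err
    return R_total, t_total
```

In a pipeline like the one the abstract outlines, the recovered rotation `R_total` relative to a canonical reference model is what yields the viewpoint label for the frames of that video; ICP of this basic form needs a reasonable initialization, which is one reason failure detection matters.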
Cite
Text
Sedaghat and Brox. "Unsupervised Generation of a Viewpoint Annotated Car Dataset from Videos." International Conference on Computer Vision, 2015.

Markdown

[Sedaghat and Brox. "Unsupervised Generation of a Viewpoint Annotated Car Dataset from Videos." International Conference on Computer Vision, 2015.](https://mlanthology.org/iccv/2015/sedaghat2015iccv-unsupervised/)

BibTeX
@inproceedings{sedaghat2015iccv-unsupervised,
  title = {{Unsupervised Generation of a Viewpoint Annotated Car Dataset from Videos}},
  author = {Sedaghat, Nima and Brox, Thomas},
  booktitle = {International Conference on Computer Vision},
  year = {2015},
  url = {https://mlanthology.org/iccv/2015/sedaghat2015iccv-unsupervised/}
}