Mapillary Planet-Scale Depth Dataset

Abstract

Learning-based methods produce remarkable results on single image depth tasks when trained on well-established benchmarks; however, there is a large gap from these benchmarks to real-world performance that is usually obscured by the common practice of fine-tuning on the target dataset. We introduce a new depth dataset that is an order of magnitude larger than previous offerings, but more importantly, contains an unprecedented gamut of locations, camera models and scene types while offering metric depth (not just up-to-scale). Additionally, we investigate the problem of training single image depth networks using images captured with many different cameras, validating an existing approach and proposing a simpler alternative. With our contributions we achieve excellent results on challenging benchmarks before fine-tuning, and set the state of the art on the popular KITTI dataset after fine-tuning.

Cite

Text

Antequera et al. "Mapillary Planet-Scale Depth Dataset." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58536-5_35

Markdown

[Antequera et al. "Mapillary Planet-Scale Depth Dataset." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/antequera2020eccv-mapillary/) doi:10.1007/978-3-030-58536-5_35

BibTeX

@inproceedings{antequera2020eccv-mapillary,
  title     = {{Mapillary Planet-Scale Depth Dataset}},
  author    = {Antequera, Manuel López and Gargallo, Pau and Hofinger, Markus and Bulò, Samuel Rota and Kuang, Yubin and Kontschieder, Peter},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58536-5_35},
  url       = {https://mlanthology.org/eccv/2020/antequera2020eccv-mapillary/}
}