MonoSOWA: Scalable Monocular 3D Object Detector Without Human Annotations

Abstract

Inferring object 3D position and orientation from a single RGB camera is a foundational task in computer vision with many important applications. Traditionally, 3D object detection methods are trained in a fully-supervised setup, requiring LiDAR and vast amounts of human annotations, which are laborious, costly, and do not scale well with the ever-increasing amounts of data being captured. We present a novel method to train a 3D object detector from a single RGB camera without domain-specific human annotations, making orders of magnitude more data available for training. The method uses a newly proposed Local Object Motion Model to disentangle the sources of object motion between subsequent frames, is approximately 700 times faster than previous work, and compensates for camera focal length differences to aggregate multiple datasets. The method is evaluated on three public datasets, where despite using no human labels, it outperforms prior work by a significant margin. It also demonstrates its versatility as a pre-training tool for fully-supervised training and shows that combining pseudo-labels from multiple datasets can achieve accuracy comparable to using human labels from a single dataset.
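The abstract mentions compensating for camera focal length differences so that data from multiple datasets can be aggregated. A common way to achieve this in monocular 3D detection (a minimal sketch of the general idea, not the paper's exact formulation; the function name and canonical focal length value are assumptions) is to rescale predicted or pseudo-labelled depth by the ratio of a canonical focal length to the actual focal length, so that objects of the same size and distance project to the same image size across cameras:

```python
def normalize_depth(depth_m: float, focal_px: float,
                    canonical_focal_px: float = 700.0) -> float:
    """Rescale a metric depth to a canonical focal length.

    Under the pinhole model, an object's projected size scales with
    focal_px / depth_m, so multiplying depth by
    (canonical_focal_px / focal_px) makes projections from cameras
    with different focal lengths directly comparable.
    (canonical_focal_px = 700.0 is an illustrative value, not from the paper.)
    """
    return depth_m * (canonical_focal_px / focal_px)


# Example: a camera with twice the canonical focal length sees objects
# at twice the apparent size, so its depths are halved after normalization.
print(normalize_depth(10.0, 1400.0))  # -> 5.0
```

At training time, all datasets would be mapped into this canonical space, and at inference time the prediction would be rescaled back using the test camera's focal length.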

Cite

Text

Skvrna and Neumann. "MonoSOWA: Scalable Monocular 3D Object Detector Without Human Annotations." International Conference on Computer Vision, 2025.

Markdown

[Skvrna and Neumann. "MonoSOWA: Scalable Monocular 3D Object Detector Without Human Annotations." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/skvrna2025iccv-monosowa/)

BibTeX

@inproceedings{skvrna2025iccv-monosowa,
  title     = {{MonoSOWA: Scalable Monocular 3D Object Detector Without Human Annotations}},
  author    = {Skvrna, Jan and Neumann, Lukas},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {7613--7623},
  url       = {https://mlanthology.org/iccv/2025/skvrna2025iccv-monosowa/}
}