Putting Objects in Perspective
Abstract
Image understanding requires not only individually estimating elements of the visual world but also capturing the interplay among them. In this paper, we provide a framework for placing local object detection in the context of the overall 3D scene by modeling the interdependence of objects, surface orientations, and camera viewpoint. Most object detection methods consider all scales and locations in the image as equally likely. We show that with probabilistic estimates of 3D geometry, both in terms of surfaces and world coordinates, we can put objects into perspective and model the scale and location variance in the image. Our approach reflects the cyclical nature of the problem by allowing probabilistic object hypotheses to refine geometry and vice-versa. Our framework allows painless substitution of almost any object detector and is easily extended to include other aspects of image understanding. Our results confirm the benefits of our integrated approach.
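The scale constraint the abstract alludes to can be made concrete with the standard ground-plane relation used in this line of work: for an object resting on the ground and viewed by a roughly level camera, the object's world height is tied to its image extent through the horizon line. The sketch below is illustrative only (the function and variable names are ours, not from the paper), assuming image rows measured upward from the image bottom:

```python
def object_height_from_image(v_top, v_bottom, v_horizon, camera_height):
    """Estimate the world height (meters) of an object standing on the
    ground plane.

    Assumes a roughly level camera mounted `camera_height` meters above
    the ground. Under perspective projection the horizon row corresponds
    to camera height, so the object's height scales as its image extent
    (v_top - v_bottom) relative to the horizon-to-footprint distance
    (v_horizon - v_bottom).
    """
    if v_horizon <= v_bottom:
        raise ValueError("horizon must lie above the object's footprint")
    return camera_height * (v_top - v_bottom) / (v_horizon - v_bottom)


# Example: with the camera at eye level (1.6 m), a detection whose
# footprint is 200 rows below the horizon and whose top is 25 rows
# above it corresponds to a plausible 1.8 m pedestrian.
height = object_height_from_image(v_top=525, v_bottom=300,
                                  v_horizon=500, camera_height=1.6)
```

This is exactly the kind of coupling the paper exploits: a candidate detection whose implied world height is implausible (say, a 10 m pedestrian) can be down-weighted, and conversely confident detections constrain the horizon and camera height estimates.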
Cite
Text
Hoiem et al. "Putting Objects in Perspective." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2006. doi:10.1109/CVPR.2006.232
Markdown
[Hoiem et al. "Putting Objects in Perspective." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2006.](https://mlanthology.org/cvpr/2006/hoiem2006cvpr-putting/) doi:10.1109/CVPR.2006.232
BibTeX
@inproceedings{hoiem2006cvpr-putting,
title = {{Putting Objects in Perspective}},
author = {Hoiem, Derek and Efros, Alexei A. and Hebert, Martial},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2006},
pages = {2137-2144},
doi = {10.1109/CVPR.2006.232},
url = {https://mlanthology.org/cvpr/2006/hoiem2006cvpr-putting/}
}