Input-Level Inductive Biases for 3D Reconstruction

Abstract

Much of the recent progress in 3D vision has been driven by the development of specialized architectures that incorporate geometrical inductive biases. In this paper we tackle 3D reconstruction using a domain-agnostic architecture and instead study how to inject the same kinds of inductive biases directly as extra inputs to the model. This approach makes it possible to apply existing general models, such as Perceivers, to this rich domain without any architectural changes, while maintaining the data efficiency of bespoke models. In particular, we study how to encode cameras, projective ray incidence, and epipolar geometry as model inputs, and demonstrate competitive multi-view depth estimation performance on multiple benchmarks.
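To make the idea of input-level geometric biases concrete, here is a minimal sketch of one such encoding: per-pixel camera ray directions derived from the intrinsics and appended to the image as extra channels, so a generic architecture receives the camera geometry as plain inputs. This is an illustrative construction, not the paper's exact feature encoding; the function name `pixel_rays` and the example intrinsics are assumptions made for the sketch.

```python
import numpy as np

def pixel_rays(K, height, width):
    """Unit-norm ray directions for each pixel, in camera coordinates.

    K: 3x3 camera intrinsics matrix. Returns an (height, width, 3) array.
    """
    # Pixel-centre grid in homogeneous image coordinates.
    u, v = np.meshgrid(np.arange(width) + 0.5, np.arange(height) + 0.5)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)  # (H, W, 3)
    # Back-project each pixel through the inverse intrinsics, then normalise.
    rays = pix @ np.linalg.inv(K).T
    return rays / np.linalg.norm(rays, axis=-1, keepdims=True)

# Example intrinsics (focal length 500, principal point at the image centre)
# for a hypothetical 64x48 frame.
K = np.array([[500.0,   0.0, 32.0],
              [  0.0, 500.0, 24.0],
              [  0.0,   0.0,  1.0]])
image = np.zeros((48, 64, 3))  # dummy H x W x RGB frame

# Concatenate the ray channels to the RGB channels: the model now sees
# the camera geometry as ordinary extra input features.
feats = np.concatenate([image, pixel_rays(K, 48, 64)], axis=-1)  # (48, 64, 6)
```

Because the geometry lives entirely in the input tensor, the same trick extends to the other cues the abstract mentions (e.g. epipolar features relating pixel pairs across views) without touching the architecture.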

Cite

Text

Yifan et al. "Input-Level Inductive Biases for 3D Reconstruction." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.00608

Markdown

[Yifan et al. "Input-Level Inductive Biases for 3D Reconstruction." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/yifan2022cvpr-inputlevel/) doi:10.1109/CVPR52688.2022.00608

BibTeX

@inproceedings{yifan2022cvpr-inputlevel,
  title     = {{Input-Level Inductive Biases for 3D Reconstruction}},
  author    = {Yifan, Wang and Doersch, Carl and Arandjelović, Relja and Carreira, João and Zisserman, Andrew},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {6176-6186},
  doi       = {10.1109/CVPR52688.2022.00608},
  url       = {https://mlanthology.org/cvpr/2022/yifan2022cvpr-inputlevel/}
}