Depth-Supervised NeRF: Fewer Views and Faster Training for Free

Abstract

A commonly observed failure mode of Neural Radiance Fields (NeRF) is fitting incorrect geometry when given an insufficient number of input views. One potential reason is that standard volumetric rendering does not enforce the constraint that most of a scene's geometry consist of empty space and opaque surfaces. We formalize the above assumption through DS-NeRF (Depth-supervised Neural Radiance Fields), a loss for learning radiance fields that takes advantage of readily available depth supervision. We leverage the fact that current NeRF pipelines require images with known camera poses that are typically estimated by running structure-from-motion (SFM). Crucially, SFM also produces sparse 3D points that can be used as "free" depth supervision during training: we add a loss that encourages the distribution of a ray's terminating depth to match a given 3D keypoint, incorporating depth uncertainty. DS-NeRF can render better images given fewer training views while training 2-3x faster. Further, we show that our loss is compatible with other recently proposed NeRF methods, demonstrating that depth is a cheap and easily digestible supervisory signal. Finally, we find that DS-NeRF can support other types of depth supervision, such as depth from scanned sensors and RGB-D reconstruction outputs.
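To make the depth loss in the abstract concrete, below is a minimal PyTorch sketch of a DS-NeRF-style penalty that pushes a ray's termination distribution (the volume-rendering weights) to concentrate around an SFM keypoint depth, softened by that keypoint's uncertainty. The function name and parameters (`depth_supervision_loss`, `weights`, `t_vals`, `target_depth`, `depth_std`) are hypothetical illustrations, not the authors' actual code; it assumes the per-ray weights have already been computed by standard volume rendering.

```python
import torch

def depth_supervision_loss(weights, t_vals, target_depth, depth_std):
    """Sketch of a depth-supervision loss on ray termination distributions.

    weights:      (num_rays, num_samples) volume-rendering weights w_i,
                  i.e. the probability mass of the ray terminating at t_i.
    t_vals:       (num_rays, num_samples) sorted sample depths along each ray.
    target_depth: (num_rays,) SFM keypoint depth D for each supervised ray.
    depth_std:    (num_rays,) per-keypoint depth uncertainty sigma
                  (e.g. derived from SFM reprojection error).
    """
    # Spacing between adjacent samples; pad the final interval.
    dists = t_vals[:, 1:] - t_vals[:, :-1]
    dists = torch.cat([dists, dists[:, -1:]], dim=-1)

    # Gaussian around the keypoint depth, evaluated at each sample:
    # samples near the observed depth get the largest supervision weight.
    diff = t_vals - target_depth[:, None]
    gauss = torch.exp(-0.5 * (diff / depth_std[:, None]) ** 2)

    # Cross-entropy-style penalty: -log w_i, weighted by proximity to the
    # observed depth (eps avoids log(0)). Summed per ray, averaged over rays.
    loss = -torch.log(weights + 1e-8) * gauss * dists
    return loss.sum(dim=-1).mean()

# Toy usage with random tensors standing in for a real NeRF forward pass:
rays, samples = 4, 64
t = torch.sort(torch.rand(rays, samples), dim=-1).values
w = torch.rand(rays, samples)
w = w / w.sum(dim=-1, keepdim=True)  # normalized termination weights
print(depth_supervision_loss(w, t, torch.rand(rays), torch.full((rays,), 0.05)))
```

In practice this term would be added to the usual photometric loss, applied only to rays that pass through SFM keypoints.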

Cite

Text

Deng et al. "Depth-Supervised NeRF: Fewer Views and Faster Training for Free." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.01254

Markdown

[Deng et al. "Depth-Supervised NeRF: Fewer Views and Faster Training for Free." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/deng2022cvpr-depthsupervised/) doi:10.1109/CVPR52688.2022.01254

BibTeX

@inproceedings{deng2022cvpr-depthsupervised,
  title     = {{Depth-Supervised NeRF: Fewer Views and Faster Training for Free}},
  author    = {Deng, Kangle and Liu, Andrew and Zhu, Jun-Yan and Ramanan, Deva},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {12882--12891},
  doi       = {10.1109/CVPR52688.2022.01254},
  url       = {https://mlanthology.org/cvpr/2022/deng2022cvpr-depthsupervised/}
}