RobustNeRF: Ignoring Distractors with Robust Losses
Abstract
Neural radiance fields (NeRF) excel at synthesizing novel views given multi-view, calibrated images of a static scene. When scenes include distractors, which are not persistent during image capture (moving objects, lighting variations, shadows), artifacts appear as view-dependent effects or 'floaters'. To cope with distractors, we advocate a form of robust estimation for NeRF training, modeling distractors in training data as outliers of an optimization problem. Our method successfully removes outliers from a scene and improves upon baselines on synthetic and real-world scenes. Our technique is simple to incorporate into modern NeRF frameworks, with few hyper-parameters. It does not assume a priori knowledge of the types of distractors, and instead focuses on the optimization problem rather than pre-processing or modeling transient objects. More results are available on our project page: https://robustnerf.github.io/public.
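The core idea, treating distractor pixels as outliers under a robust loss, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it assumes a simplified trimmed photometric loss in which per-pixel residuals above a chosen quantile are masked out and the mask is spatially smoothed before gating the loss. The function name `robust_photometric_loss` and the hyper-parameters `inlier_quantile` and `smooth` are illustrative, not taken from the paper.

```python
# Hedged sketch (not the paper's exact implementation): a trimmed robust
# photometric loss in JAX. Per-pixel residuals above a chosen quantile are
# treated as outliers (distractors) and masked out; the mask is detached
# from the gradient so it only gates the loss.
import jax
import jax.numpy as jnp


def robust_photometric_loss(pred_rgb, gt_rgb, inlier_quantile=0.8, smooth=3):
    """pred_rgb, gt_rgb: [H, W, 3] rendered and ground-truth patches."""
    # Per-pixel residual magnitude (mean absolute color error).
    residual = jnp.mean(jnp.abs(pred_rgb - gt_rgb), axis=-1)       # [H, W]

    # Pixels whose residual is below the chosen quantile count as inliers.
    threshold = jnp.quantile(residual, inlier_quantile)
    mask = (residual <= threshold).astype(pred_rgb.dtype)          # [H, W]

    # Spatially smooth the mask so isolated pixels don't flip label,
    # then re-binarize (a simple box filter stands in for any diffusion).
    kernel = jnp.ones((smooth, smooth)) / (smooth * smooth)
    smoothed = jax.scipy.signal.convolve2d(mask, kernel, mode="same")
    mask = (smoothed > 0.5).astype(pred_rgb.dtype)

    # The mask gates the loss but receives no gradient.
    mask = jax.lax.stop_gradient(mask)
    per_pixel = jnp.sum((pred_rgb - gt_rgb) ** 2, axis=-1)
    return jnp.sum(mask * per_pixel) / jnp.maximum(jnp.sum(mask), 1.0)


# Example: pixels in a simulated "distractor" region get large residuals
# and are excluded from the loss by the inlier mask.
key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
gt = jax.random.uniform(k1, (32, 32, 3))
pred = gt + 0.01 * jax.random.normal(k2, (32, 32, 3))
pred = pred.at[:8, :8].set(1.0)  # simulate a distractor region
print(robust_photometric_loss(pred, gt))
```

In this sketch, detaching the mask with `stop_gradient` keeps the inlier/outlier decision from being optimized away, so the network only receives gradients from pixels currently judged to be part of the static scene.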
Cite
Text
Sabour et al. "RobustNeRF: Ignoring Distractors with Robust Losses." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.01976

Markdown

[Sabour et al. "RobustNeRF: Ignoring Distractors with Robust Losses." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/sabour2023cvpr-robustnerf/) doi:10.1109/CVPR52729.2023.01976

BibTeX
@inproceedings{sabour2023cvpr-robustnerf,
title = {{RobustNeRF: Ignoring Distractors with Robust Losses}},
author = {Sabour, Sara and Vora, Suhani and Duckworth, Daniel and Krasin, Ivan and Fleet, David J. and Tagliasacchi, Andrea},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2023},
pages = {20626--20636},
doi = {10.1109/CVPR52729.2023.01976},
url = {https://mlanthology.org/cvpr/2023/sabour2023cvpr-robustnerf/}
}