Few-Shot NeRF by Adaptive Rendering Loss Regularization

Abstract

Novel view synthesis with sparse inputs poses great challenges to Neural Radiance Field (NeRF). Recent works demonstrate that the frequency regularization of Positional Encoding (PE) can achieve promising results for few-shot NeRF. In this work, we reveal that there exists an inconsistency between the frequency regularization of PE and rendering loss. This prevents few-shot NeRF from synthesizing higher-quality novel views. To mitigate this inconsistency, we propose Adaptive Rendering loss regularization for few-shot NeRF, dubbed AR-NeRF. Specifically, we present a two-phase rendering supervision and an adaptive rendering loss weight learning strategy to align the frequency relationship between PE and 2D-pixel supervision. In this way, AR-NeRF can learn global structures better in the early training phase and adaptively learn local details throughout the training process. Extensive experiments show that our AR-NeRF achieves state-of-the-art performance on different datasets, including object-level and complex scenes. Our code will be available at https://github.com/GhiXu/AR-NeRF.
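For readers unfamiliar with the PE frequency regularization the abstract refers to, the sketch below shows the standard NeRF positional encoding together with a simple linear frequency-unmasking schedule (in the spirit of prior frequency-regularized few-shot NeRF work). This is an illustrative assumption, not AR-NeRF's actual schedule; the function names and the linear ramp are hypothetical.

```python
import numpy as np

def positional_encoding(x, num_freqs=10):
    """Standard NeRF PE: map each coordinate to sin/cos features
    at exponentially increasing frequencies 2^0 ... 2^(L-1)."""
    freqs = 2.0 ** np.arange(num_freqs)            # (L,)
    scaled = x[..., None] * freqs * np.pi          # (..., D, L)
    enc = np.concatenate([np.sin(scaled), np.cos(scaled)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)          # (..., D * 2L)

def frequency_mask(num_freqs, step, total_steps):
    """Hypothetical linear schedule: expose low PE frequencies first,
    gradually unmasking higher ones as training progresses."""
    t = np.clip(step / total_steps, 0.0, 1.0)
    return np.clip(t * num_freqs - np.arange(num_freqs), 0.0, 1.0)

# One 3D sample point; 4 frequency bands -> 3 * 2 * 4 = 24 features.
x = np.array([[0.5, -0.25, 0.1]])
enc = positional_encoding(x, num_freqs=4)          # shape (1, 24)
mask = frequency_mask(4, step=500, total_steps=1000)  # [1, 1, 0, 0]
```

AR-NeRF's contribution is to keep the 2D rendering supervision consistent with a schedule like this one, rather than masking PE frequencies while supervising with a fixed full-frequency pixel loss.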

Cite

Text

Xu et al. "Few-Shot NeRF by Adaptive Rendering Loss Regularization." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72848-8_8

Markdown

[Xu et al. "Few-Shot NeRF by Adaptive Rendering Loss Regularization." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/xu2024eccv-fewshot/) doi:10.1007/978-3-031-72848-8_8

BibTeX

@inproceedings{xu2024eccv-fewshot,
  title     = {{Few-Shot NeRF by Adaptive Rendering Loss Regularization}},
  author    = {Xu, Qingshan and Yi, Xuanyu and Xu, Jianyao and Tao, Wenbing and Ong, Yew Soon and Zhang, Hanwang},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-72848-8_8},
  url       = {https://mlanthology.org/eccv/2024/xu2024eccv-fewshot/}
}