InsertNeRF: Instilling Generalizability into NeRF with HyperNet Modules

Abstract

Generalizing Neural Radiance Fields (NeRF) to new scenes is a significant challenge that existing approaches struggle to address without extensive modifications to the vanilla NeRF framework. We introduce **InsertNeRF**, a method for **INS**tilling g**E**ne**R**alizabili**T**y into **NeRF**. By utilizing multiple plug-and-play HyperNet modules, InsertNeRF dynamically tailors NeRF's weights to specific reference scenes, transforming multi-scale sampling-aware features into scene-specific representations. This novel design allows for more accurate and efficient representations of complex appearances and geometries. Experiments show that this method not only achieves superior generalization performance but also provides a flexible pathway for integration with other NeRF-like systems, even in sparse input settings. Code will be available at: https://github.com/bbbbby-99/InsertNeRF.
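The sketch below illustrates the general idea behind a hypernetwork-based "plug-and-play" module of this kind: a small network predicts the weights of a NeRF MLP layer from features aggregated over reference views, so the backbone adapts to a new scene without per-scene optimization. This is a minimal, assumed illustration, not the authors' implementation; names such as `HyperLinear` and `scene_feat_dim` are hypothetical.

```python
import torch
import torch.nn as nn


class HyperLinear(nn.Module):
    """Linear layer whose weight and bias are generated from a scene feature."""

    def __init__(self, in_dim: int, out_dim: int, scene_feat_dim: int):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        # Hypernetwork: maps the reference-scene feature to this layer's parameters.
        self.weight_gen = nn.Linear(scene_feat_dim, in_dim * out_dim)
        self.bias_gen = nn.Linear(scene_feat_dim, out_dim)

    def forward(self, x: torch.Tensor, scene_feat: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) per-sample-point features; scene_feat: (scene_feat_dim,)
        w = self.weight_gen(scene_feat).view(self.out_dim, self.in_dim)
        b = self.bias_gen(scene_feat)
        return torch.relu(x @ w.t() + b)


# Usage: plug the hyper-generated layer into a NeRF-style MLP.
scene_feat = torch.randn(128)       # feature pooled from reference views (assumed)
points = torch.randn(1024, 63)      # positionally encoded sample points
layer = HyperLinear(in_dim=63, out_dim=256, scene_feat_dim=128)
hidden = layer(points, scene_feat)  # (1024, 256) scene-adapted features
```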

Cite

Text

Bao et al. "InsertNeRF: Instilling Generalizability into NeRF with HyperNet Modules." International Conference on Learning Representations, 2024.

Markdown

[Bao et al. "InsertNeRF: Instilling Generalizability into NeRF with HyperNet Modules." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/bao2024iclr-insertnerf/)

BibTeX

@inproceedings{bao2024iclr-insertnerf,
  title     = {{InsertNeRF: Instilling Generalizability into NeRF with HyperNet Modules}},
  author    = {Bao, Yanqi and Ding, Tianyu and Huo, Jing and Li, Wenbin and Li, Yuxin and Gao, Yang},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/bao2024iclr-insertnerf/}
}