BLiRF: Bandlimited Radiance Fields for Dynamic Scene Modeling

Abstract

Inferring the 3D structure of a non-rigid dynamic scene from a single moving camera is an under-constrained problem. Inspired by the remarkable progress of neural radiance fields (NeRFs) in photo-realistic novel view synthesis of static scenes, NeRFs have also been extended to dynamic settings. Such methods rely heavily on implicit neural priors to regularize the problem. In this work, we take a step back and investigate how current implementations may entail deleterious effects, including limited expressiveness, entanglement of light and density fields, and sub-optimal motion localization. Further, we devise a factorisation-based framework that represents the scene as a composition of bandlimited, high-dimensional signals. We demonstrate compelling results across complex dynamic scenes involving changes in lighting, texture, and long-range dynamics.
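
The idea of composing a scene from bandlimited signals can be illustrated with a short sketch. The Python below is not the authors' implementation; it is a minimal, hypothetical reading in which a scalar field value at one spatial location varies over time as a truncated Fourier series, so temporal variation is confined to a fixed band of low frequencies. All names (K, make_coeffs, field_at) are invented for this example, and in a learned model the per-location coefficients would be produced by a spatial network or a factorized tensor rather than sampled at random.

import numpy as np

# Hypothetical sketch: a field value f(x, t) that is bandlimited in time,
#   f(x, t) = c0(x) + sum_k [ a_k(x) cos(2*pi*k*t) + b_k(x) sin(2*pi*k*t) ],
# truncated at K frequencies so motion is restricted to low temporal bands.

K = 4  # number of temporal frequencies kept (the "band limit")
rng = np.random.default_rng(0)

def make_coeffs():
    """Coefficients for one spatial location (randomly sampled here;
    a real model would predict these from the location x)."""
    c0 = rng.normal()
    a = rng.normal(size=K)
    b = rng.normal(size=K)
    return c0, a, b

def field_at(coeffs, t):
    """Evaluate the bandlimited temporal signal at time t in [0, 1]."""
    c0, a, b = coeffs
    k = np.arange(1, K + 1)
    return c0 + np.sum(a * np.cos(2 * np.pi * k * t)
                       + b * np.sin(2 * np.pi * k * t))

coeffs = make_coeffs()
for t in (0.0, 0.25, 0.5):
    print(f"f(x, t={t}) = {field_at(coeffs, t):+.3f}")

Truncating the temporal spectrum in this way acts as an explicit smoothness prior on motion, which is one plausible way to reduce reliance on the implicit neural priors the abstract cautions against.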

Cite

Text

Ramasinghe et al. "BLiRF: Bandlimited Radiance Fields for Dynamic Scene Modeling." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I5.28264

Markdown

[Ramasinghe et al. "BLiRF: Bandlimited Radiance Fields for Dynamic Scene Modeling." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/ramasinghe2024aaai-blirf/) doi:10.1609/AAAI.V38I5.28264

BibTeX

@inproceedings{ramasinghe2024aaai-blirf,
  title     = {{BLiRF: Bandlimited Radiance Fields for Dynamic Scene Modeling}},
  author    = {Ramasinghe, Sameera and Shevchenko, Violetta and Avraham, Gil and van den Hengel, Anton},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {4641--4649},
  doi       = {10.1609/AAAI.V38I5.28264},
  url       = {https://mlanthology.org/aaai/2024/ramasinghe2024aaai-blirf/}
}