Neural Directional Encoding for Efficient and Accurate View-Dependent Appearance Modeling

Abstract

Novel-view synthesis of specular objects like shiny metals or glossy paints remains a significant challenge. Not only the glossy appearance but also global illumination effects, including reflections of other objects in the environment, are critical components to faithfully reproduce a scene. In this paper, we present Neural Directional Encoding (NDE), a view-dependent appearance encoding of neural radiance fields (NeRF) for rendering specular objects. NDE transfers the concept of feature-grid-based spatial encoding to the angular domain, significantly improving the ability to model high-frequency angular signals. In contrast to previous methods that use encoding functions with only angular input, we additionally cone-trace spatial features to obtain a spatially varying directional encoding, which addresses challenging interreflection effects. Extensive experiments on both synthetic and real datasets show that a NeRF model with NDE (1) outperforms the state of the art on view synthesis of specular objects and (2) works with small networks to allow fast (real-time) inference. The source code is available at: https://github.com/lwwu2/nde
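To illustrate the core idea of a feature-grid encoding applied to the angular domain, here is a minimal NumPy sketch: a unit view direction is mapped onto a learnable 2D feature grid over the sphere and the surrounding features are bilinearly interpolated. The function name, the equirectangular parameterization, and the grid layout are all illustrative assumptions; the paper's actual encoding and cone-traced spatial features are more involved.

```python
import numpy as np

def directional_encoding(d, grid):
    """Query a feature grid parameterized over the sphere (hypothetical sketch).

    d:    (3,) unit view direction.
    grid: (H, W, C) learnable feature grid; in training, its entries would be
          optimized alongside the NeRF network weights.
    Returns a (C,) feature vector for direction d.
    """
    H, W, C = grid.shape
    # Equirectangular mapping: direction -> (u, v) grid coordinates.
    theta = np.arccos(np.clip(d[2], -1.0, 1.0))   # polar angle in [0, pi]
    phi = np.arctan2(d[1], d[0]) % (2 * np.pi)    # azimuth in [0, 2*pi)
    u = phi / (2 * np.pi) * (W - 1)
    v = theta / np.pi * (H - 1)
    # Bilinearly interpolate the four surrounding grid cells.
    x0, y0 = int(u), int(v)
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    fx, fy = u - x0, v - y0
    return ((1 - fx) * (1 - fy) * grid[y0, x0]
            + fx * (1 - fy) * grid[y0, x1]
            + (1 - fx) * fy * grid[y1, x0]
            + fx * fy * grid[y1, x1])
```

Compared with purely analytic angular encodings (e.g., spherical harmonics), a grid of locally supported features can represent much higher-frequency angular signals, which is what sharp specular reflections require.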

Cite

Text

Wu et al. "Neural Directional Encoding for Efficient and Accurate View-Dependent Appearance Modeling." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.01999

Markdown

[Wu et al. "Neural Directional Encoding for Efficient and Accurate View-Dependent Appearance Modeling." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/wu2024cvpr-neural/) doi:10.1109/CVPR52733.2024.01999

BibTeX

@inproceedings{wu2024cvpr-neural,
  title     = {{Neural Directional Encoding for Efficient and Accurate View-Dependent Appearance Modeling}},
  author    = {Wu, Liwen and Bi, Sai and Xu, Zexiang and Luan, Fujun and Zhang, Kai and Georgiev, Iliyan and Sunkavalli, Kalyan and Ramamoorthi, Ravi},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {21157--21166},
  doi       = {10.1109/CVPR52733.2024.01999},
  url       = {https://mlanthology.org/cvpr/2024/wu2024cvpr-neural/}
}