Preconditioners for the Stochastic Training of Neural Fields

Abstract

Neural fields encode continuous multidimensional signals as neural networks, enabling a wide range of applications in computer vision, robotics, and geometry. While Adam is effective for stochastic optimization, it often requires long training times. To accelerate training without sacrificing accuracy, we explore alternative optimization techniques. Traditional second-order methods such as L-BFGS are unsuitable for stochastic settings, since their curvature estimates and line searches rely on full-batch gradients and break down under mini-batch noise. We propose a theoretical framework for training neural fields with curvature-aware diagonal preconditioners, and demonstrate their effectiveness across tasks such as image reconstruction, shape modeling, and Neural Radiance Fields (NeRF).
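The abstract does not spell out how the preconditioner is constructed, so the sketch below is only a generic illustration of the idea, not the authors' method: a diagonal preconditioner built from an exponential moving average of squared mini-batch gradients (a common diagonal curvature proxy), applied to the stochastic training of a small coordinate MLP. All function names, hyperparameters, and the toy 1-D signal are hypothetical choices for this sketch.

import torch

def preconditioned_step(params, grads, diag_curv, lr=1e-3, beta=0.99, eps=1e-8):
    """One diagonally preconditioned step: theta <- theta - lr * D^{-1/2} g.
    diag_curv holds a running EMA of squared gradients as a curvature proxy
    (an illustrative assumption, not the paper's preconditioner)."""
    for p, g, d in zip(params, grads, diag_curv):
        d.mul_(beta).addcmul_(g, g, value=1 - beta)  # update diagonal curvature estimate
        p.sub_(lr * g / (d.sqrt() + eps))            # preconditioned parameter update

# Toy usage: fit a tiny MLP "neural field" to stochastic samples of a 1-D signal.
torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)
diag_curv = [torch.zeros_like(p) for p in model.parameters()]

x = torch.rand(256, 1) * 6.28   # coordinates in [0, 2*pi)
y = torch.sin(x)                # signal values to reconstruct

for step in range(200):
    idx = torch.randint(0, 256, (32,))            # stochastic mini-batch
    loss = ((model(x[idx]) - y[idx]) ** 2).mean()
    grads = torch.autograd.grad(loss, list(model.parameters()))
    with torch.no_grad():
        preconditioned_step(list(model.parameters()), grads, diag_curv)

The EMA of squared gradients is the same curvature proxy Adam uses for its denominator; the sketch simply makes the "diagonal preconditioner applied to a stochastic gradient step" structure explicit.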

Cite

Text

Chng et al. "Preconditioners for the Stochastic Training of Neural Fields." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.02535

Markdown

[Chng et al. "Preconditioners for the Stochastic Training of Neural Fields." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/chng2025cvpr-preconditioners/) doi:10.1109/CVPR52734.2025.02535

BibTeX

@inproceedings{chng2025cvpr-preconditioners,
  title     = {{Preconditioners for the Stochastic Training of Neural Fields}},
  author    = {Chng, Shin-Fang and Saratchandran, Hemanth and Lucey, Simon},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  pages     = {27222--27232},
  doi       = {10.1109/CVPR52734.2025.02535},
  url       = {https://mlanthology.org/cvpr/2025/chng2025cvpr-preconditioners/}
}