ROGR: Relightable 3D Objects Using Generative Relighting
Abstract
We introduce ROGR, a novel approach that reconstructs a relightable 3D model of an object captured from multiple views, driven by a generative relighting model that simulates the effects of placing the object under novel environment illuminations. Our method samples the appearance of the object under multiple lighting environments, creating a dataset that is used to train a lighting-conditioned Neural Radiance Field (NeRF) that outputs the object's appearance under any input environmental lighting. The lighting-conditioned NeRF uses a novel dual-branch architecture to encode the general lighting effects and specularities separately. The optimized lighting-conditioned NeRF enables efficient feed-forward relighting under arbitrary environment maps without requiring per-illumination optimization or light transport simulation. We evaluate our approach on the established TensoIR and Stanford-ORB datasets, where it improves upon the state-of-the-art on most metrics, and showcase our approach on real-world object captures.
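To make the dual-branch conditioning concrete, here is a minimal sketch, in PyTorch, of one way a lighting-conditioned radiance field could route an environment-map embedding through separate branches for general lighting effects and specularities. Every module name, dimension, and the additive combination of branch outputs is an illustrative assumption, not the architecture from the paper.

```python
import torch
import torch.nn as nn


class DualBranchRelightableField(nn.Module):
    """Hypothetical sketch of a lighting-conditioned radiance field with
    separate branches for general (view-independent) lighting effects and
    specular (view-dependent) effects, both conditioned on a precomputed
    environment-map embedding `env_code`."""

    def __init__(self, pos_dim=63, dir_dim=27, env_dim=128, hidden=256):
        super().__init__()
        # Shared geometry trunk: encoded position -> density + feature vector.
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)
        # Branch 1: general lighting effects, conditioned on lighting only.
        self.general_branch = nn.Sequential(
            nn.Linear(hidden + env_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )
        # Branch 2: specularities, additionally conditioned on view direction.
        self.specular_branch = nn.Sequential(
            nn.Linear(hidden + env_dim + dir_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x_enc, d_enc, env_code):
        h = self.trunk(x_enc)
        sigma = torch.relu(self.density_head(h))
        general = self.general_branch(torch.cat([h, env_code], dim=-1))
        specular = self.specular_branch(torch.cat([h, env_code, d_enc], dim=-1))
        # Combine the branches additively, then squash to valid colors.
        rgb = torch.sigmoid(general + specular)
        return sigma, rgb


# Usage with random stand-ins for encoded positions, directions, and lighting:
model = DualBranchRelightableField()
sigma, rgb = model(torch.randn(4096, 63), torch.randn(4096, 27),
                   torch.randn(4096, 128))
```

Separating view-independent and view-dependent shading into distinct heads mirrors the abstract's description of encoding general lighting effects and specularities separately; the paper's actual branch structure and environment-map encoder may differ.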
Cite
Text
Tang et al. "ROGR: Relightable 3D Objects Using Generative Relighting." Advances in Neural Information Processing Systems, 2025.
Markdown
[Tang et al. "ROGR: Relightable 3D Objects Using Generative Relighting." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/tang2025neurips-rogr/)
BibTeX
@inproceedings{tang2025neurips-rogr,
title = {{ROGR: Relightable 3D Objects Using Generative Relighting}},
author = {Tang, Jiapeng and Levine, Matthew Jacob and Verbin, Dor and Garbin, Stephan J. and Nießner, Matthias and Martin-Brualla, Ricardo and Srinivasan, Pratul P. and Henzler, Philipp},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/tang2025neurips-rogr/}
}