NOVA: NOvel View Augmentation for Neural Composition of Dynamic Objects
Abstract
We propose a novel-view augmentation (NOVA) strategy to train NeRFs for photo-realistic 3D composition of dynamic objects in a static scene. Compared to prior work, our framework significantly reduces blending artifacts when inserting multiple dynamic objects into a 3D scene at novel views and times; achieves comparable PSNR without the need for additional ground truth modalities like optical flow; and overall provides ease, flexibility, and scalability in neural composition. Our codebase is on GitHub.
Cite
Text
Agrawal et al. "NOVA: NOvel View Augmentation for Neural Composition of Dynamic Objects." IEEE/CVF International Conference on Computer Vision Workshops, 2023. doi:10.1109/ICCVW60793.2023.00463
Markdown
[Agrawal et al. "NOVA: NOvel View Augmentation for Neural Composition of Dynamic Objects." IEEE/CVF International Conference on Computer Vision Workshops, 2023.](https://mlanthology.org/iccvw/2023/agrawal2023iccvw-nova/) doi:10.1109/ICCVW60793.2023.00463
BibTeX
@inproceedings{agrawal2023iccvw-nova,
title = {{NOVA: NOvel View Augmentation for Neural Composition of Dynamic Objects}},
author = {Agrawal, Dakshit and Xu, Jiajie and Mustikovela, Siva Karthik and Gkioulekas, Ioannis and Shrivastava, Ashish and Chai, Yuning},
booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
year = {2023},
pages = {4290--4294},
doi = {10.1109/ICCVW60793.2023.00463},
url = {https://mlanthology.org/iccvw/2023/agrawal2023iccvw-nova/}
}