Neural Mesh-Based Graphics
Abstract
We revisit NPBG, the popular approach to novel view synthesis that introduced the ubiquitous point-feature neural rendering paradigm. We are interested in particular in data-efficient learning with fast view synthesis. We achieve this through a view-dependent, mesh-based rasterization of denser point descriptors, together with a foreground/background scene rendering split and an improved loss. Training solely on a single scene, we outperform NPBG, which was pretrained on ScanNet and then finetuned per scene. We also perform competitively with the state-of-the-art method SVS, which was pretrained on full datasets (DTU and Tanks and Temples) and then finetuned per scene, despite its deeper neural renderer.
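To make the core idea concrete, below is a minimal sketch (not the authors' implementation) of mesh-based descriptor rasterization: learned per-vertex descriptors are interpolated across triangle faces via barycentric coordinates, yielding a dense feature image for a neural renderer to decode. It assumes a standard mesh rasterizer has already produced per-pixel face ids and barycentric coordinates; all names here are hypothetical.

import numpy as np

def rasterize_descriptors(faces, vertex_desc, pix_to_face, barycentric):
    """
    faces:        (F, 3) int   -- vertex indices per triangle
    vertex_desc:  (V, D) float -- learned per-vertex neural descriptors
    pix_to_face:  (H, W) int   -- face id hit by each pixel, -1 if none
    barycentric:  (H, W, 3)    -- barycentric coords of each hit
    returns:      (H, W, D) dense descriptor image (zeros on background)
    """
    H, W = pix_to_face.shape
    D = vertex_desc.shape[1]
    out = np.zeros((H, W, D), dtype=vertex_desc.dtype)

    hit = pix_to_face >= 0                 # foreground (mesh-covered) pixels
    tri = faces[pix_to_face[hit]]          # (N, 3) vertex ids of hit faces
    desc = vertex_desc[tri]                # (N, 3, D) descriptors at corners
    w = barycentric[hit][..., None]        # (N, 3, 1) interpolation weights
    out[hit] = (w * desc).sum(axis=1)      # barycentric interpolation
    return out

In the full pipeline, such a descriptor image would be decoded by a convolutional neural renderer, with the foreground mesh and the background handled by separate rendering passes before compositing, as the abstract's scene split suggests.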
Cite
Text
Jena et al. "Neural Mesh-Based Graphics." European Conference on Computer Vision Workshops, 2022. doi:10.1007/978-3-031-25066-8_45
Markdown
[Jena et al. "Neural Mesh-Based Graphics." European Conference on Computer Vision Workshops, 2022.](https://mlanthology.org/eccvw/2022/jena2022eccvw-neural/) doi:10.1007/978-3-031-25066-8_45
BibTeX
@inproceedings{jena2022eccvw-neural,
title = {{Neural Mesh-Based Graphics}},
author = {Jena, Shubhendu and Multon, Franck and Boukhayma, Adnane},
booktitle = {European Conference on Computer Vision Workshops},
year = {2022},
pages = {739-757},
doi = {10.1007/978-3-031-25066-8_45},
url = {https://mlanthology.org/eccvw/2022/jena2022eccvw-neural/}
}