S2RF: Semantically Stylized Radiance Fields
Abstract
We present our method for transferring style from arbitrary style images to objects within a 3D scene. Our primary objective is to offer more control in 3D scene stylization, facilitating the creation of customizable and stylized scene images from arbitrary viewpoints. To achieve this, we propose a novel approach that incorporates a nearest-neighbor-based loss, allowing for flexible 3D scene reconstruction while effectively capturing intricate style details and ensuring multi-view consistency.
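The abstract does not spell out the loss, but a nearest-neighbor feature-matching loss of the kind it alludes to is commonly computed over deep feature maps (e.g. flattened VGG features): each feature vector from a rendered view is matched to its closest style feature by cosine distance, and those distances are averaged. The sketch below is our own illustrative assumption, not the paper's implementation; the function name and the use of cosine distance are choices made for clarity.

```python
import numpy as np

def nn_feature_match_loss(feat_render, feat_style):
    """Nearest-neighbor feature-matching loss (illustrative sketch).

    feat_render: (N, D) feature vectors from a rendered view.
    feat_style:  (M, D) feature vectors from the style image.
    For each rendered feature, find the cosine distance to its nearest
    style feature, then average those distances.
    """
    # L2-normalize rows so dot products equal cosine similarity.
    r = feat_render / np.linalg.norm(feat_render, axis=1, keepdims=True)
    s = feat_style / np.linalg.norm(feat_style, axis=1, keepdims=True)
    sim = r @ s.T                      # (N, M) cosine similarities
    # Cosine distance from each rendered feature to its nearest style feature.
    nn_dist = 1.0 - sim.max(axis=1)
    return nn_dist.mean()
```

Minimizing such a loss pulls each local rendered feature toward *some* style feature rather than toward a global statistic, which is what lets it capture intricate style details; in practice this would be evaluated per object mask to get the semantic, per-object control the paper targets.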
Cite
Text
Kumar et al. "S2RF: Semantically Stylized Radiance Fields." IEEE/CVF International Conference on Computer Vision Workshops, 2023. doi:10.1109/ICCVW60793.2023.00317

Markdown
[Kumar et al. "S2RF: Semantically Stylized Radiance Fields." IEEE/CVF International Conference on Computer Vision Workshops, 2023.](https://mlanthology.org/iccvw/2023/kumar2023iccvw-s2rf/) doi:10.1109/ICCVW60793.2023.00317

BibTeX
@inproceedings{kumar2023iccvw-s2rf,
title = {{S2RF: Semantically Stylized Radiance Fields}},
author = {Kumar, Moneish and Panse, Neeraj and Lahiri, Dishani},
booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
year = {2023},
pages = {2944-2949},
doi = {10.1109/ICCVW60793.2023.00317},
url = {https://mlanthology.org/iccvw/2023/kumar2023iccvw-s2rf/}
}