Advancing Few-Shot Novel View Synthesis with Teacher-Student Guided Scene Geometry Refinement

Abstract

Neural radiance fields (NeRF) excel at realistic view synthesis but require many input images, which limits their practical use when only sparse views are available. To tackle this, we propose teacher-student guided NeRF (TSGNeRF), an effective solution for few-shot novel view synthesis. In our framework, the teacher model handles sparse input images while the student model focuses on training speed and rendering quality. The training process is divided into three stages. First, we train the teacher model on the sparse views to learn the coarse geometry of the scene and generate multiple pseudo images. Second, the student model is trained on the pseudo multi-view images generated by the teacher model, capturing the underlying structure of the scene. Third, the student model is fine-tuned on the original sparse views. The relatively accurate geometry obtained in the second stage allows more precise color propagation to unobserved viewpoints, further refining the scene geometry by eliminating floating artifacts. This paper is an AIM Challenge paper describing our solution for the Sparse Neural Rendering task, covering both Track 1 and Track 2. Experiments demonstrate that our framework achieves the best performance on multiple benchmark datasets, outperforming state-of-the-art few-shot novel view synthesis methods.
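The three-stage pipeline above can be sketched as follows. This is a minimal illustrative skeleton, not the authors' implementation: the `Model` class, `train_on`, and `render` are hypothetical stand-ins for a NeRF model, its optimization loop, and volume rendering of novel views.

```python
from dataclasses import dataclass, field


@dataclass
class Model:
    """Stand-in for a NeRF-style model; records what it was trained on."""
    name: str
    seen: list = field(default_factory=list)

    def train_on(self, views):
        # Placeholder for an optimization loop over the given views.
        self.seen.append(list(views))

    def render(self, poses):
        # Placeholder for volume-rendering pseudo images at novel poses.
        return [f"{self.name}_pseudo_{p}" for p in poses]


def three_stage_training(sparse_views, novel_poses):
    # Stage 1: teacher learns coarse scene geometry from sparse inputs.
    teacher = Model("teacher")
    teacher.train_on(sparse_views)

    # Teacher renders pseudo multi-view images at unobserved poses.
    pseudo_views = teacher.render(novel_poses)

    # Stage 2: student learns the scene structure from the pseudo views.
    student = Model("student")
    student.train_on(pseudo_views)

    # Stage 3: fine-tune the student on the original sparse views,
    # refining geometry and suppressing floating artifacts.
    student.train_on(sparse_views)
    return student
```

The key design choice the sketch captures is that the student never sees the teacher's weights, only its rendered outputs, so the dense pseudo supervision regularizes the student's geometry before the sparse real views sharpen it.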

Cite

Text

Xing et al. "Advancing Few-Shot Novel View Synthesis with Teacher-Student Guided Scene Geometry Refinement." European Conference on Computer Vision Workshops, 2024. doi:10.1007/978-3-031-91856-8_12

Markdown

[Xing et al. "Advancing Few-Shot Novel View Synthesis with Teacher-Student Guided Scene Geometry Refinement." European Conference on Computer Vision Workshops, 2024.](https://mlanthology.org/eccvw/2024/xing2024eccvw-advancing/) doi:10.1007/978-3-031-91856-8_12

BibTeX

@inproceedings{xing2024eccvw-advancing,
  title     = {{Advancing Few-Shot Novel View Synthesis with Teacher-Student Guided Scene Geometry Refinement}},
  author    = {Xing, Yan and Wang, Pan and Guo, Yali and Wu, Yongxin and Liu, Shuangguan and Cai, Youcheng and Liu, Ligang},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2024},
  pages     = {195--211},
  doi       = {10.1007/978-3-031-91856-8_12},
  url       = {https://mlanthology.org/eccvw/2024/xing2024eccvw-advancing/}
}