Novel View Synthesis Under Large-Deviation Viewpoint for Autonomous Driving

Abstract

Novel view synthesis is a critical task in autonomous driving. Although 3D Gaussian Splatting (3D-GS) has shown success in generating novel views, it faces challenges in maintaining high-quality rendering when viewpoints deviate significantly from the training set. This difficulty primarily stems from complex lighting conditions and geometric inconsistencies in texture-less regions. To address these issues, we propose an attention-based illumination model that leverages light fields from neighboring views, enhancing the realism of synthesized images. Additionally, we propose a geometry optimization method using planar homography to improve geometric consistency in texture-less regions. Our experiments demonstrate substantial improvements in synthesis quality for large-deviation viewpoints, validating the effectiveness of our approach.
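The geometry optimization mentioned above builds on planar homography. As an illustration only (not the authors' pipeline), the standard plane-induced homography between two calibrated views is H = K (R − t nᵀ / d) K⁻¹, where (R, t) is the relative pose, n the plane normal, and d the plane depth in the source camera; all numeric values below are made-up assumptions:

```python
import numpy as np

def plane_homography(K, R, t, n, d):
    """Homography induced by a plane with normal n at depth d
    (source-camera frame), mapping source pixels into the target
    view with relative pose (R, t). Textbook formula, shown only
    to illustrate the geometric tool the abstract refers to."""
    return K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)

# Hypothetical calibration and pose, for illustration only.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)                      # no rotation between the views
t = np.array([0.1, 0.0, 0.0])      # small lateral baseline
n = np.array([0.0, 0.0, -1.0])     # fronto-parallel plane
d = 5.0                            # plane depth in the source view

H = plane_homography(K, R, t, n, d)

p = np.array([320.0, 240.0, 1.0])  # principal point, homogeneous
q = H @ p
q = q / q[2]                       # normalize back to pixels
# q == [336., 240., 1.]: the point shifts 16 px with the baseline
```

In a texture-less region, such a homography lets pixels in one view be checked against neighboring views through a shared plane hypothesis, which is the kind of cross-view geometric consistency the abstract targets.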

Cite

Text

Ma et al. "Novel View Synthesis Under Large-Deviation Viewpoint for Autonomous Driving." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I6.32641

Markdown

[Ma et al. "Novel View Synthesis Under Large-Deviation Viewpoint for Autonomous Driving." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/ma2025aaai-novel/) doi:10.1609/AAAI.V39I6.32641

BibTeX

@inproceedings{ma2025aaai-novel,
  title     = {{Novel View Synthesis Under Large-Deviation Viewpoint for Autonomous Driving}},
  author    = {Ma, Xin and Zhang, Jiguang and Lu, Peng and Xu, Shibiao and Pan, Chengwei},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {6000--6008},
  doi       = {10.1609/AAAI.V39I6.32641},
  url       = {https://mlanthology.org/aaai/2025/ma2025aaai-novel/}
}