AuraFusion360: Augmented Unseen Region Alignment for Reference-Based 360° Unbounded Scene Inpainting

Abstract

Three-dimensional scene inpainting is crucial for applications from virtual reality to architectural visualization, yet existing methods struggle with view consistency and geometric accuracy in 360° unbounded scenes. We present AuraFusion360, a novel reference-based method that enables high-quality object removal and hole filling in 3D scenes represented by Gaussian Splatting. Our approach introduces (1) depth-aware unseen mask generation for accurate occlusion identification, (2) Adaptive Guided Depth Diffusion, a zero-shot method for accurate initial point placement without requiring additional training, and (3) SDEdit-based detail enhancement for multi-view coherence. We also introduce 360-USID, the first comprehensive dataset for 360° unbounded scene inpainting with ground truth. Extensive experiments demonstrate that AuraFusion360 significantly outperforms existing methods, achieving superior perceptual quality while maintaining geometric accuracy across dramatic viewpoint changes.
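
The SDEdit-based detail enhancement mentioned in (3) can be illustrated with a generic image-to-image diffusion call: partially noise a coarse render, then denoise it with a diffusion prior so that new detail appears while the input layout is preserved. The sketch below is not the authors' implementation; it uses the Hugging Face diffusers img2img pipeline (an SDEdit-style noising/denoising loop), and the model ID, prompt, input filename, and strength value are illustrative assumptions only.

# Minimal sketch of SDEdit-style detail enhancement on a single rendered view.
# Assumptions: a coarsely inpainted render exists at "coarse_render.png";
# the model ID, prompt, and strength are illustrative, not the paper's settings.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

coarse = Image.open("coarse_render.png").convert("RGB").resize((512, 512))

# SDEdit idea: add a limited amount of noise (controlled by `strength`) to the
# coarse render, then denoise with the diffusion prior to synthesize detail
# while staying close to the input's structure.
refined = pipe(
    prompt="a photo of the scene after object removal",
    image=coarse,
    strength=0.4,          # lower = more faithful to the coarse render
    guidance_scale=7.5,
).images[0]

refined.save("refined_render.png")

In a multi-view setting, a loop of this kind would be applied per rendered view before the refined images are used to update the scene representation; the single-image call above only illustrates the enhancement step itself.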

Cite

Text

Wu et al. "AuraFusion360: Augmented Unseen Region Alignment for Reference-Based 360° Unbounded Scene Inpainting." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.01526

Markdown

[Wu et al. "AuraFusion360: Augmented Unseen Region Alignment for Reference-Based 360° Unbounded Scene Inpainting." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/wu2025cvpr-aurafusion360/) doi:10.1109/CVPR52734.2025.01526

BibTeX

@inproceedings{wu2025cvpr-aurafusion360,
  title     = {{AuraFusion360: Augmented Unseen Region Alignment for Reference-Based 360° Unbounded Scene Inpainting}},
  author    = {Wu, Chung-Ho and Chen, Yang-Jung and Chen, Ying-Huan and Lee, Jie-Ying and Ke, Bo-Hsu and Mu, Chun-Wei Tuan and Huang, Yi-Chuan and Lin, Chin-Yang and Chen, Min-Hung and Lin, Yen-Yu and Liu, Yu-Lun},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  pages     = {16366--16376},
  doi       = {10.1109/CVPR52734.2025.01526},
  url       = {https://mlanthology.org/cvpr/2025/wu2025cvpr-aurafusion360/}
}