AerialMegaDepth: Learning Aerial-Ground Reconstruction and View Synthesis

Abstract

We explore the task of geometric reconstruction of images captured from a mixture of ground and aerial views. Current state-of-the-art learning-based approaches fail to handle the extreme viewpoint variation between aerial-ground image pairs. Our hypothesis is that the lack of high-quality, co-registered aerial-ground datasets for training is a key reason for this failure. Such data is difficult to assemble precisely because it is difficult to reconstruct in a scalable way. To overcome this challenge, we propose a scalable framework combining pseudo-synthetic renderings from 3D city-wide meshes (e.g., Google Earth) with real, ground-level crowd-sourced images (e.g., MegaDepth). The pseudo-synthetic data simulates a wide range of aerial viewpoints, while the real, crowd-sourced images help improve visual fidelity for ground-level images where mesh-based renderings lack sufficient detail, effectively bridging the domain gap between real images and pseudo-synthetic renderings. Using this hybrid dataset, we fine-tune several state-of-the-art algorithms and achieve significant improvements on real-world, zero-shot aerial-ground tasks. For example, we observe that baseline DUSt3R localizes fewer than 5% of aerial-ground pairs within 5 degrees of camera rotation error, while fine-tuning with our data raises accuracy to nearly 56%, addressing a major failure point in handling large viewpoint changes. Beyond camera estimation and scene reconstruction, our dataset also improves performance on downstream tasks like novel-view synthesis in challenging aerial-ground scenarios, demonstrating the practical value of our approach in real-world applications.
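For context, the 5-degree figure quoted above is the standard geodesic rotation-error threshold used to score relative camera pose. The sketch below is a plain NumPy illustration of how such an accuracy number is typically computed from predicted and ground-truth rotations; it is not the paper's evaluation code, and the function names are hypothetical.

import numpy as np

def rotation_error_deg(R_pred: np.ndarray, R_gt: np.ndarray) -> float:
    """Geodesic angle (degrees) between two 3x3 rotation matrices."""
    # Relative rotation taking the prediction onto the ground truth.
    R_rel = R_pred.T @ R_gt
    # trace(R_rel) = 1 + 2*cos(theta); clipping guards against numerical drift.
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))

def fraction_within(R_preds, R_gts, threshold_deg: float = 5.0) -> float:
    """Fraction of image pairs whose rotation error falls below the threshold."""
    errors = [rotation_error_deg(Rp, Rg) for Rp, Rg in zip(R_preds, R_gts)]
    return float(np.mean([e < threshold_deg for e in errors]))

Under this metric, "fewer than 5% within 5 degrees" versus "nearly 56%" corresponds to the value returned by fraction_within on the set of aerial-ground test pairs before and after fine-tuning.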

Cite

Text

Vuong et al. "AerialMegaDepth: Learning Aerial-Ground Reconstruction and View Synthesis." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.02019

Markdown

[Vuong et al. "AerialMegaDepth: Learning Aerial-Ground Reconstruction and View Synthesis." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/vuong2025cvpr-aerialmegadepth/) doi:10.1109/CVPR52734.2025.02019

BibTeX

@inproceedings{vuong2025cvpr-aerialmegadepth,
  title     = {{AerialMegaDepth: Learning Aerial-Ground Reconstruction and View Synthesis}},
  author    = {Vuong, Khiem and Ghosh, Anurag and Ramanan, Deva and Narasimhan, Srinivasa and Tulsiani, Shubham},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  pages     = {21674--21684},
  doi       = {10.1109/CVPR52734.2025.02019},
  url       = {https://mlanthology.org/cvpr/2025/vuong2025cvpr-aerialmegadepth/}
}