REOBench: Benchmarking Robustness of Earth Observation Foundation Models

Abstract

Earth observation foundation models have shown strong generalization across multiple Earth observation tasks, but their robustness under real-world perturbations remains underexplored. To bridge this gap, we introduce REOBench, the first comprehensive benchmark for evaluating the robustness of Earth observation foundation models across six tasks and twelve types of image corruptions, including both appearance-based and geometric perturbations. To ensure realistic and fine-grained evaluation, our benchmark focuses on high-resolution optical remote sensing images, which are widely used in critical applications such as urban planning and disaster response. We conduct a systematic evaluation of a broad range of models trained using masked image modeling, contrastive learning, and vision-language pre-training paradigms. Our results reveal that (1) existing Earth observation foundation models experience significant performance degradation when exposed to input corruptions; (2) the severity of degradation varies across tasks, model architectures, backbone sizes, and types of corruption, with performance drops ranging from less than 1% to over 25%; and (3) vision-language models show enhanced robustness, particularly in multimodal tasks. REOBench underscores the vulnerability of current Earth observation foundation models to real-world corruptions and provides actionable insights for developing more robust and reliable models.

Cite

Text

Li et al. "REOBench: Benchmarking Robustness of Earth Observation Foundation Models." Advances in Neural Information Processing Systems, 2025.

Markdown

[Li et al. "REOBench: Benchmarking Robustness of Earth Observation Foundation Models." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/li2025neurips-reobench/)

BibTeX

@inproceedings{li2025neurips-reobench,
  title     = {{REOBench: Benchmarking Robustness of Earth Observation Foundation Models}},
  author    = {Li, Xiang and Tao, Yong and Zhang, Siyuan and Liu, Siwei and Xiong, Zhitong and Luo, Chunbo and Liu, Lu and Pechenizkiy, Mykola and Zhu, Xiao Xiang and Huang, Tianjin},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/li2025neurips-reobench/}
}