RSCC: A Large-Scale Remote Sensing Change Caption Dataset for Disaster Events
Abstract
Remote sensing is critical for disaster monitoring, yet existing datasets lack temporal image pairs and detailed textual annotations. While single-snapshot imagery dominates current resources, it fails to capture dynamic disaster impacts over time. To address this gap, we introduce the Remote Sensing Change Caption (RSCC) dataset, a large-scale benchmark comprising 62,351 pre-/post-disaster image pairs (spanning earthquakes, floods, wildfires, and more), each accompanied by rich, human-like change captions. By bridging the temporal and semantic divide in remote sensing data, RSCC enables robust training and evaluation of vision-language models for disaster-aware bi-temporal understanding. Our results highlight RSCC’s ability to facilitate detailed disaster-related analysis, paving the way for more accurate, interpretable, and scalable vision-language applications in remote sensing. Code and dataset are available at https://github.com/Bili-Sakura/RSCC.
Cite
Text
Chen et al. "RSCC: A Large-Scale Remote Sensing Change Caption Dataset for Disaster Events." Advances in Neural Information Processing Systems, 2025.
Markdown
[Chen et al. "RSCC: A Large-Scale Remote Sensing Change Caption Dataset for Disaster Events." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/chen2025neurips-rscc/)
BibTeX
@inproceedings{chen2025neurips-rscc,
  title     = {{RSCC: A Large-Scale Remote Sensing Change Caption Dataset for Disaster Events}},
  author    = {Chen, Zhenyuan and Wang, Chenxi and Zhang, Ningyu and Zhang, Feng},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/chen2025neurips-rscc/}
}