Contrastive Ground-Level Image and Remote Sensing Pre-Training Improves Representation Learning for Natural World Imagery
Abstract
Multimodal image-text contrastive learning has shown that joint representations can be learned across modalities. Here, we show how leveraging multiple views of image data with contrastive learning can improve downstream fine-grained classification performance for species recognition, even when one view is absent. We propose ContRastive Image-remote Sensing Pre-training (CRISP)—a new pre-training task for ground-level and aerial image representation learning of the natural world—and introduce Nature Multi-View (NMV), a dataset of natural world imagery including >3 million ground-level and aerial image pairs for over 6,000 plant taxa across the ecologically diverse state of California. The NMV dataset and accompanying material are available at hf.co/datasets/andyvhuynh/NatureMultiView.
Cite
Text
Huynh et al. "Contrastive Ground-Level Image and Remote Sensing Pre-Training Improves Representation Learning for Natural World Imagery." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72989-8_10
Markdown
[Huynh et al. "Contrastive Ground-Level Image and Remote Sensing Pre-Training Improves Representation Learning for Natural World Imagery." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/huynh2024eccv-contrastive/) doi:10.1007/978-3-031-72989-8_10
BibTeX
@inproceedings{huynh2024eccv-contrastive,
  title = {{Contrastive Ground-Level Image and Remote Sensing Pre-Training Improves Representation Learning for Natural World Imagery}},
  author = {Huynh, Andy V and Gillespie, Lauren and Lopez-Saucedo, Jael and Tang, Claire and Sikand, Rohan and Exp{\'o}sito-Alonso, Mois{\'e}s},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year = {2024},
  doi = {10.1007/978-3-031-72989-8_10},
  url = {https://mlanthology.org/eccv/2024/huynh2024eccv-contrastive/}
}