CLIBD: Bridging Vision and Genomics for Biodiversity Monitoring at Scale
Abstract
Measuring biodiversity is crucial for understanding ecosystem health. While prior works have developed machine learning models for taxonomic classification of photographic images and DNA separately, in this work we introduce a multi-modal approach that combines both, using CLIP-style contrastive learning to align images, barcode DNA, and text-based representations of taxonomic labels in a unified embedding space. This allows accurate classification of both known and unknown insect species without task-specific fine-tuning, leveraging contrastive learning for the first time to fuse DNA and image data. Our method surpasses previous single-modality approaches in accuracy by over 8% on zero-shot learning tasks, showcasing its effectiveness in biodiversity studies.
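The CLIP-style alignment described above can be sketched for a single pair of modalities (e.g. image and DNA barcode embeddings of the same specimens) as a symmetric InfoNCE objective; matched rows attract, all other rows in the batch repel. This is an illustrative NumPy sketch under assumed function names, not the authors' implementation, and CLIBD aligns three modalities by applying such pairwise losses across image, DNA, and taxonomic-text encoders.

```python
import numpy as np

def l2_normalize(x):
    # Project embeddings onto the unit sphere so dot products are cosine similarities.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def log_softmax(z, axis):
    # Numerically stable log-softmax along the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=axis, keepdims=True))

def contrastive_loss(emb_a, emb_b, temperature=0.07):
    """Symmetric InfoNCE loss between two batches of paired embeddings.

    emb_a, emb_b: (batch, dim) arrays; row i of each is the same specimen
    encoded through a different modality (e.g. image vs. DNA barcode).
    Names and the temperature value are illustrative assumptions.
    """
    a = l2_normalize(emb_a)
    b = l2_normalize(emb_b)
    logits = a @ b.T / temperature          # (batch, batch) similarity matrix
    idx = np.arange(logits.shape[0])
    # Cross-entropy with the diagonal (matched pairs) as the target, in both directions.
    loss_a = -log_softmax(logits, axis=1)[idx, idx].mean()   # image -> DNA
    loss_b = -log_softmax(logits, axis=0)[idx, idx].mean()   # DNA -> image
    return (loss_a + loss_b) / 2
```

Once trained, zero-shot classification of an unseen species reduces to a nearest-neighbor lookup in the shared space: embed the query image (or barcode) and retrieve the closest taxonomic-label embedding.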
Cite
Text
Gong et al. "CLIBD: Bridging Vision and Genomics for Biodiversity Monitoring at Scale." International Conference on Learning Representations, 2025.
Markdown
[Gong et al. "CLIBD: Bridging Vision and Genomics for Biodiversity Monitoring at Scale." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/gong2025iclr-clibd/)
BibTeX
@inproceedings{gong2025iclr-clibd,
title = {{CLIBD: Bridging Vision and Genomics for Biodiversity Monitoring at Scale}},
author = {Gong, ZeMing and Wang, Austin and Huo, Xiaoliang and Haurum, Joakim Bruslund and Lowe, Scott C. and Taylor, Graham W. and Chang, Angel X.},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/gong2025iclr-clibd/}
}