AstroCLIP: Cross-Modal Pre-Training for Astronomical Foundation Models
Abstract
We present AstroCLIP, a strategy to facilitate the construction of astronomical foundation models that bridge the gap between diverse astronomical observational modalities. We demonstrate that a cross-modal contrastive learning approach between images and spectra of galaxies yields highly informative embeddings of both modalities. In particular, we apply our method to multi-band images and optical spectra from the Dark Energy Spectroscopic Instrument (DESI), and show that: (1) these embeddings are well-aligned between modalities and can be used for accurate cross-modal searches, and (2) these embeddings encode valuable physical information about the galaxies - in particular redshift and stellar mass - that can be used to achieve competitive zero- and few-shot predictions without further finetuning. Additionally, in the process of developing our approach, we also construct a novel, transformer-based model and pretraining approach for galaxy spectra.
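The abstract does not come with code, but the cross-modal contrastive objective it describes can be illustrated with a short sketch: a CLIP-style symmetric InfoNCE loss that pulls together embeddings of an image and a spectrum from the same galaxy while pushing apart mismatched pairs in the batch. The function name, temperature value, and encoder outputs below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed, not the authors' code): CLIP-style symmetric
# contrastive loss aligning galaxy image and spectrum embeddings.
import torch
import torch.nn.functional as F

def astroclip_contrastive_loss(image_emb, spectrum_emb, temperature=0.07):
    """image_emb, spectrum_emb: (batch, dim) outputs of the two encoders."""
    # L2-normalize so dot products are cosine similarities.
    img = F.normalize(image_emb, dim=-1)
    spec = F.normalize(spectrum_emb, dim=-1)

    # Pairwise similarity matrix between all images and spectra in the batch.
    logits = img @ spec.t() / temperature

    # Matched image/spectrum pairs (same galaxy) lie on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Symmetric cross-entropy: image-to-spectrum and spectrum-to-image.
    loss_i2s = F.cross_entropy(logits, targets)
    loss_s2i = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_i2s + loss_s2i)
```

In this sketch, the positives are the diagonal entries (image and spectrum of the same galaxy), and every other pair in the batch serves as a negative; once trained, the shared embedding space supports the cross-modal retrieval and zero-/few-shot prediction uses mentioned in the abstract.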
Cite
Text
Lanusse et al. "AstroCLIP: Cross-Modal Pre-Training for Astronomical Foundation Models." NeurIPS 2023 Workshops: AI4Science, 2023.
Markdown
[Lanusse et al. "AstroCLIP: Cross-Modal Pre-Training for Astronomical Foundation Models." NeurIPS 2023 Workshops: AI4Science, 2023.](https://mlanthology.org/neuripsw/2023/lanusse2023neuripsw-astroclip/)
BibTeX
@inproceedings{lanusse2023neuripsw-astroclip,
title = {{AstroCLIP: Cross-Modal Pre-Training for Astronomical Foundation Models}},
author = {Lanusse, Francois and Parker, Liam Holden and Golkar, Siavash and Bietti, Alberto and Cranmer, Miles and Eickenberg, Michael and Krawezik, Geraud and McCabe, Michael and Ohana, Ruben and Pettee, Mariel and Régaldo-Saint Blancard, Bruno and Tesileanu, Tiberiu and Cho, Kyunghyun and Ho, Shirley},
booktitle = {NeurIPS 2023 Workshops: AI4Science},
year = {2023},
url = {https://mlanthology.org/neuripsw/2023/lanusse2023neuripsw-astroclip/}
}