Scaling Channel-Adaptive Self-Supervised Learning
Abstract
Recent advances in self-supervised pre-training of foundation models for natural images have made them a popular choice for many visual systems and applications. Self-supervised strategies are also promising in non-RGB scientific imaging domains such as biological, medical, and satellite imagery, but their broader application is hampered by heterogeneity in channel composition and semantics across datasets: two datasets may contain different numbers of channels, and these may reveal distinct aspects of an object or scene. Recent work on channel-adaptive strategies reports substantial advantages for methods that accommodate variable channel compositions without sacrificing the ability to jointly encode channels; yet how these strategies behave at scale remains unclear. Here we show that, surprisingly, when trained across large-scale datasets, independent encoding of channels outperforms joint-encoding methods by a substantial margin. We validate this result through an extensive set of experiments on diverse datasets, from cell microscopy to geospatial imagery. Our DINO BoC approach sets a new state of the art across challenging benchmarks, including generalization to out-of-distribution tasks and unseen channel combinations at test time. We will open-source the code, along with model weights that constitute a new general-purpose feature extractor for fluorescence microscopy.
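
To make the contrast in the abstract concrete: a minimal PyTorch sketch of channel-independent encoding is given below, assuming it means passing each channel through a shared single-channel encoder and pooling the per-channel features with an order-invariant operation. This is an illustration of the general technique only, not the authors' DINO BoC implementation; the class and variable names are hypothetical.

import torch
import torch.nn as nn

class ChannelIndependentEncoder(nn.Module):
    """Encodes each channel separately with a shared backbone, then pools."""

    def __init__(self, backbone: nn.Module, embed_dim: int):
        super().__init__()
        self.backbone = backbone  # shared encoder for single-channel inputs
        self.embed_dim = embed_dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width); the channel count may vary
        b, c, h, w = x.shape
        # Fold channels into the batch so each channel is encoded independently
        per_channel = x.reshape(b * c, 1, h, w)
        feats = self.backbone(per_channel)            # (b * c, embed_dim)
        feats = feats.reshape(b, c, self.embed_dim)
        # Order-invariant aggregation over channels (mean pooling here), which
        # also tolerates unseen channel combinations at test time
        return feats.mean(dim=1)

# Toy usage with a stand-in backbone; a ViT would be used in practice.
backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(128))
model = ChannelIndependentEncoder(backbone, embed_dim=128)
out = model(torch.randn(2, 5, 32, 32))  # a 5-channel input
print(out.shape)                        # torch.Size([2, 128])

Because the backbone never conditions on channel identity or count, the same weights apply to datasets with any number of channels; a joint-encoding method, by contrast, would fix the input channel dimension at training time.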
Cite
Text
De Lorenci et al. "Scaling Channel-Adaptive Self-Supervised Learning." Transactions on Machine Learning Research, 2025.

Markdown
[De Lorenci et al. "Scaling Channel-Adaptive Self-Supervised Learning." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/lorenci2025tmlr-scaling/)

BibTeX
@article{lorenci2025tmlr-scaling,
  title = {{Scaling Channel-Adaptive Self-Supervised Learning}},
  author = {De Lorenci, Alice V. and Yi, Seung Eun and Moutakanni, Théo and Bojanowski, Piotr and Couprie, Camille and Caicedo, Juan C. and Pernice, Wolfgang Maximilian Anton},
  journal = {Transactions on Machine Learning Research},
  year = {2025},
  url = {https://mlanthology.org/tmlr/2025/lorenci2025tmlr-scaling/}
}