ExPLoRA: Parameter-Efficient Extended Pre-Training to Adapt Vision Transformers Under Domain Shifts

Abstract

Parameter-efficient fine-tuning (PEFT) techniques such as low-rank adaptation (LoRA) can effectively adapt large pre-trained foundation models to downstream tasks using only a small fraction (0.1%-10%) of the original trainable weights. An under-explored question in PEFT is how to extend the pre-training phase without supervised labels; that is, can we adapt a pre-trained foundation model to a new domain via efficient self-supervised pre-training on that domain? In this work, we introduce ExPLoRA, a highly effective technique to improve transfer learning of pre-trained vision transformers (ViTs) under domain shifts. Initializing a ViT with pre-trained weights from large natural-image datasets, such as those of DinoV2 or MAE, ExPLoRA continues the unsupervised pre-training objective on a new domain, unfreezing 1-2 pre-trained ViT blocks and tuning all other layers with LoRA. We then fine-tune the resulting model on the new domain for supervised learning, using only LoRA. Our experiments demonstrate state-of-the-art results on satellite imagery, even outperforming fully pre-trained and fine-tuned ViTs. Using the DinoV2 training objective, we demonstrate up to an 8% improvement in linear-probing top-1 accuracy on downstream tasks while using <10% of the parameters of prior fully-tuned state-of-the-art approaches. Our ablation studies confirm the efficacy of our approach over other baselines, such as PEFT alone. Code is available at: https://samar-khanna.github.io/ExPLoRA/
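To make the parameter scheme concrete, below is a minimal, hypothetical PyTorch sketch of the recipe the abstract describes: freeze a pre-trained ViT, fully unfreeze its last 1-2 blocks, and attach LoRA adapters to the attention projections of the remaining frozen blocks. This is not the authors' implementation; the model loader (timm's vit_base_patch16_224), the LoRALinear wrapper, the rank/alpha values, and the choice of which projections receive LoRA are all illustrative assumptions, and the extended pre-training loss itself (the DinoV2 or MAE objective) is omitted.

```python
# Hypothetical sketch of the ExPLoRA parameter scheme (not the authors' code).
import math
import torch
import torch.nn as nn
import timm  # assumed available; any ViT exposing a `.blocks` list would do


class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (alpha/r) B A x."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.zeros(rank, base.in_features))
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        nn.init.kaiming_uniform_(self.lora_a, a=math.sqrt(5))  # B stays zero,
        self.scale = alpha / rank  # so the update is zero at initialization.

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)


def build_explora_vit(unfrozen_blocks: int = 1, rank: int = 8):
    vit = timm.create_model("vit_base_patch16_224", pretrained=True)

    # Freeze every pre-trained weight first.
    for p in vit.parameters():
        p.requires_grad_(False)

    # Fully unfreeze the last 1-2 transformer blocks (here: the last one).
    for block in vit.blocks[-unfrozen_blocks:]:
        for p in block.parameters():
            p.requires_grad_(True)

    # Attach LoRA to the attention projections of all remaining frozen blocks.
    for block in vit.blocks[:-unfrozen_blocks]:
        block.attn.qkv = LoRALinear(block.attn.qkv, rank=rank)
        block.attn.proj = LoRALinear(block.attn.proj, rank=rank)

    return vit


model = build_explora_vit()
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable / total:.1%} of {total} parameters")
```

Running the sketch prints the trainable-parameter fraction, which mirrors the kind of budget behind the abstract's <10% figure; the resulting model would then be trained with the unsupervised objective on the new domain before LoRA-only supervised fine-tuning.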

Cite

Text

Khanna et al. "ExPLoRA: Parameter-Efficient Extended Pre-Training to Adapt Vision Transformers Under Domain Shifts." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Khanna et al. "ExPLoRA: Parameter-Efficient Extended Pre-Training to Adapt Vision Transformers Under Domain Shifts." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/khanna2025icml-explora/)

BibTeX

@inproceedings{khanna2025icml-explora,
  title     = {{ExPLoRA: Parameter-Efficient Extended Pre-Training to Adapt Vision Transformers Under Domain Shifts}},
  author    = {Khanna, Samar and Irgau, Medhanie and Lobell, David B. and Ermon, Stefano},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {29799--29818},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/khanna2025icml-explora/}
}