Federated Fine-Tuning of Vision Foundation Models via Probabilistic Masking
Abstract
Foundation Models (FMs) have revolutionized machine learning with their adaptability and high performance across tasks; yet, their integration into Federated Learning (FL) is challenging due to the substantial communication overhead caused by their extensive parameterization. We present DeltaMask, a novel method that efficiently fine-tunes FMs in FL at an ultra-low bitrate, well below $1$ bpp. Departing from traditional weight-training approaches, DeltaMask employs stochastic masking to detect highly effective subnetworks within FMs and leverages the stochasticity and sparsity of client masks to compress updates into a compact grayscale image using probabilistic filters. Our comprehensive evaluation across $8$ datasets and $5$ pre-trained models of various network architectures demonstrates that DeltaMask achieves bitrates as low as $0.09$ bpp, substantially improving communication efficiency while preserving FM performance.
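The core idea sketched in the abstract, learning per-parameter keep-probabilities and transmitting sampled binary masks instead of dense weight updates, can be illustrated with a minimal sketch. The function names (`sample_mask`, `pack_mask_to_bytes`) and the uniform probabilities are illustrative assumptions, not DeltaMask's actual API; the packing step only mimics the "compact grayscale image" idea by grouping 8 mask bits per byte, whereas the paper additionally applies probabilistic filters for further compression.

```python
import numpy as np

def sample_mask(mask_probs: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Draw a binary mask from per-parameter Bernoulli keep-probabilities."""
    return (rng.random(mask_probs.shape) < mask_probs).astype(np.uint8)

def pack_mask_to_bytes(mask_bits: np.ndarray) -> np.ndarray:
    """Pack a flat 0/1 mask into bytes (8 mask entries per uint8 'pixel')."""
    flat = mask_bits.ravel()
    pad = (-flat.size) % 8          # pad to a multiple of 8 bits
    flat = np.pad(flat, (0, pad))
    return np.packbits(flat)

# Example: a client samples and packs a mask for a 1M-parameter model.
rng = np.random.default_rng(0)
probs = rng.uniform(0.0, 1.0, size=1_000_000)  # learned keep-probabilities (illustrative)
mask = sample_mask(probs, rng)
packed = pack_mask_to_bytes(mask)
print(f"mask bits: {mask.size}, packed bytes: {packed.size}")  # 1 bpp before further coding
```

In this sketch, each parameter costs at most 1 bit before entropy or filter-based coding; the sub-$1$ bpp rates reported in the abstract come from exploiting the sparsity and redundancy of these masks with additional compression stages.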
Cite
Text
Tsouvalas et al. "Federated Fine-Tuning of Vision Foundation Models via Probabilistic Masking." ICML 2024 Workshops: FM-Wild, 2024.
Markdown
[Tsouvalas et al. "Federated Fine-Tuning of Vision Foundation Models via Probabilistic Masking." ICML 2024 Workshops: FM-Wild, 2024.](https://mlanthology.org/icmlw/2024/tsouvalas2024icmlw-federated/)
BibTeX
@inproceedings{tsouvalas2024icmlw-federated,
title = {{Federated Fine-Tuning of Vision Foundation Models via Probabilistic Masking}},
author = {Tsouvalas, Vasileios and Asano, Yuki M and Saeed, Aaqib},
booktitle = {ICML 2024 Workshops: FM-Wild},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/tsouvalas2024icmlw-federated/}
}