Cross-Modal Fine-Tuning: Align Then Refine
Abstract
Fine-tuning large-scale pretrained models has led to tremendous progress in well-studied modalities such as vision and NLP. However, similar gains have not been observed in many other modalities due to a lack of relevant pretrained models. In this work, we propose ORCA, a general cross-modal fine-tuning framework that extends the applicability of a single large-scale pretrained model to diverse modalities. ORCA adapts to a target task via an align-then-refine workflow: given the target input, ORCA first learns an embedding network that aligns the embedded feature distribution with the pretraining modality. The pretrained model is then fine-tuned on the embedded data to exploit the knowledge shared across modalities. Through extensive experiments, we show that ORCA obtains state-of-the-art results on 3 benchmarks containing over 60 datasets from 12 modalities, outperforming a wide range of hand-designed, AutoML, general-purpose, and task-specific cross-modal methods. We highlight the importance of data alignment via a series of ablation studies and exemplify ORCA’s utility in data-limited regimes.
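To make the two-stage workflow concrete, below is a minimal PyTorch sketch of align-then-refine. All names here (`embedder`, `pretrained_body`, `head`, `source_feats`) are hypothetical, and the alignment objective is a simple moment-matching stand-in; the paper itself aligns the embedded target distribution with the pretraining modality using an optimal-transport-based dataset distance.

```python
# Minimal sketch of the align-then-refine workflow, in PyTorch.
# All module/variable names are hypothetical. The alignment loss is a
# simple moment-matching stand-in; the paper instead minimizes an
# optimal-transport-based dataset distance between the embedded target
# data and reference features from the pretraining modality.
import torch


def cycle(loader):
    # Repeat a finite DataLoader indefinitely.
    while True:
        for batch in loader:
            yield batch


def moment_matching_loss(z_target, z_source):
    # Stand-in alignment objective: match the mean and covariance of the
    # embedded target features (B x D) to the reference source features.
    mean_gap = (z_target.mean(0) - z_source.mean(0)).pow(2).sum()
    cov_gap = (torch.cov(z_target.T) - torch.cov(z_source.T)).pow(2).sum()
    return mean_gap + cov_gap


def align_then_refine(embedder, pretrained_body, head, target_loader,
                      source_feats, task_loss,
                      align_steps=1000, refine_steps=5000, lr=1e-4):
    batches = cycle(target_loader)

    # Stage 1 (align): train only the embedding network so the embedded
    # target distribution resembles the pretraining-modality features.
    opt = torch.optim.Adam(embedder.parameters(), lr=lr)
    for _ in range(align_steps):
        x, _ = next(batches)
        loss = moment_matching_loss(embedder(x), source_feats)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Stage 2 (refine): fine-tune the embedder, the pretrained model body,
    # and a task-specific head end-to-end on the target task.
    params = (list(embedder.parameters())
              + list(pretrained_body.parameters())
              + list(head.parameters()))
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(refine_steps):
        x, y = next(batches)
        loss = task_loss(head(pretrained_body(embedder(x))), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

The key design choice this sketch mirrors is that only the embedder is trained during alignment, so the pretrained body is untouched until the embedded data already resembles its pretraining inputs; the full model is then fine-tuned end-to-end.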
Cite
Text
Shen et al. "Cross-Modal Fine-Tuning: Align Then Refine." International Conference on Machine Learning, 2023.
Markdown
[Shen et al. "Cross-Modal Fine-Tuning: Align Then Refine." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/shen2023icml-crossmodal/)
BibTeX
@inproceedings{shen2023icml-crossmodal,
title = {{Cross-Modal Fine-Tuning: Align Then Refine}},
author = {Shen, Junhong and Li, Liam and Dery, Lucio M. and Staten, Corey and Khodak, Mikhail and Neubig, Graham and Talwalkar, Ameet},
booktitle = {International Conference on Machine Learning},
year = {2023},
pages = {31030--31056},
volume = {202},
url = {https://mlanthology.org/icml/2023/shen2023icml-crossmodal/}
}