An Analysis of Model Robustness Across Concurrent Distribution Shifts
Abstract
Machine learning models, meticulously optimized for source data, often fail to generalize to target data when faced with distribution shifts (DSs). Previous benchmarking studies, though extensive, have mainly focused on simple DSs. Recognizing that DSs often occur in more complex forms in real-world scenarios, we broaden our study to include multiple concurrent shifts, such as unseen domain shifts combined with spurious correlations. We evaluate 26 algorithms, ranging from simple heuristic augmentations to zero-shot inference with foundation models, across 168 source-target pairs from eight datasets. Our analysis of over 100K models reveals that (i) concurrent DSs typically worsen performance compared to a single shift, with certain exceptions, (ii) when a method improves generalization under one type of shift, it tends to be effective under others as well, and (iii) heuristic data augmentations achieve the best overall performance on both synthetic and real-world datasets.
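Because the headline finding is that simple heuristic augmentations remain the strongest overall baseline, the sketch below illustrates what such an augmentation pipeline typically looks like in a standard PyTorch/torchvision setup. The particular transforms and hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal sketch of a "heuristic data augmentation" training pipeline,
# assuming torchvision is available. Transform choices and magnitudes are
# illustrative, not the configuration evaluated in the paper.
from torchvision import transforms

# Source-domain training pipeline: stack simple, hand-designed augmentations
# before the usual tensor conversion and normalization.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),  # random crops and scales
    transforms.RandomHorizontalFlip(),                     # mirror images
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),            # perturb color statistics
    transforms.RandAugment(num_ops=2, magnitude=9),        # random policy of basic ops
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Target/evaluation pipeline: no augmentation, so robustness is measured on
# unmodified target-domain images.
eval_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```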
Cite

Text
Jeon et al. "An Analysis of Model Robustness Across Concurrent Distribution Shifts." Transactions on Machine Learning Research, 2025.

Markdown
[Jeon et al. "An Analysis of Model Robustness Across Concurrent Distribution Shifts." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/jeon2025tmlr-analysis/)

BibTeX
@article{jeon2025tmlr-analysis,
title = {{An Analysis of Model Robustness Across Concurrent Distribution Shifts}},
author = {Jeon, Myeongho and Choi, Suhwan and Lee, Hyoje and Yeo, Teresa},
journal = {Transactions on Machine Learning Research},
year = {2025},
url = {https://mlanthology.org/tmlr/2025/jeon2025tmlr-analysis/}
}