VFA: Vision Frequency Analysis of Foundation Models and Human

Abstract

Machine learning models often struggle with distribution shifts in real-world scenarios, whereas humans exhibit robust adaptation. Models that better align with human perception may achieve higher out-of-distribution generalization. In this study, we investigate how various characteristics of large-scale computer vision models influence their alignment with human perception and their robustness to distribution shifts. Our findings indicate that increasing model and data size, along with incorporating rich semantic information and multiple modalities, significantly enhances models' alignment with human perception and their overall robustness. Our empirical analysis demonstrates a strong correlation between out-of-distribution accuracy and human alignment.

Cite

Text

Bayazi et al. "VFA: Vision Frequency Analysis of Foundation Models and Human." ICML 2024 Workshops: FM-Wild, 2024.

Markdown

[Bayazi et al. "VFA: Vision Frequency Analysis of Foundation Models and Human." ICML 2024 Workshops: FM-Wild, 2024.](https://mlanthology.org/icmlw/2024/bayazi2024icmlw-vfa/)

BibTeX

@inproceedings{bayazi2024icmlw-vfa,
  title     = {{VFA: Vision Frequency Analysis of Foundation Models and Human}},
  author    = {Bayazi, Mohammad Javad Darvishi and Arefin, Md Rifat and Faubert, Jocelyn and Rish, Irina},
  booktitle = {ICML 2024 Workshops: FM-Wild},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/bayazi2024icmlw-vfa/}
}