Sparse Mixture-of-Experts Are Domain Generalizable Learners

Abstract

In domain generalization (DG), most existing methods have focused on loss function design. This paper explores an orthogonal direction: the design of the backbone architecture. It is motivated by an empirical finding that transformer-based models trained with empirical risk minimization (ERM) outperform CNN-based models employing state-of-the-art (SOTA) DG algorithms on multiple DG datasets. We develop a formal framework to characterize a network's robustness to distribution shifts by studying its architecture's alignment with the correlations in the dataset. This analysis guides us to propose a novel DG model built upon vision transformers, namely \emph{Generalizable Mixture-of-Experts (GMoE)}. Experiments on DomainBed demonstrate that GMoE trained with ERM outperforms SOTA DG baselines by a large margin.
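The core building block the abstract refers to is a sparse Mixture-of-Experts layer placed inside a vision transformer block, where a gating network routes each token to a small number of expert feed-forward networks. The sketch below illustrates that top-k routing idea in PyTorch; it is a minimal, illustrative implementation, not the paper's code, and all hyperparameters (number of experts, top_k, dimensions) are assumptions for demonstration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoEFFN(nn.Module):
    """Minimal sketch of a top-k gated sparse MoE feed-forward layer of the
    kind GMoE-style models place inside transformer blocks. Hyperparameters
    are illustrative, not those of the paper."""

    def __init__(self, dim=384, hidden_dim=1536, num_experts=6, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(dim, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(dim, hidden_dim), nn.GELU(), nn.Linear(hidden_dim, dim)
            )
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (batch, tokens, dim)
        scores = self.gate(x)                                  # routing logits per token
        topk_val, topk_idx = scores.topk(self.top_k, dim=-1)   # keep only top-k experts
        weights = F.softmax(topk_val, dim=-1)                  # renormalize over selected experts
        out = torch.zeros_like(x)
        # For clarity each expert is applied densely and masked afterwards;
        # efficient implementations dispatch only the routed tokens.
        for slot in range(self.top_k):
            idx = topk_idx[..., slot]                          # expert index chosen for this slot
            w = weights[..., slot].unsqueeze(-1)               # its gating weight
            for e, expert in enumerate(self.experts):
                mask = (idx == e).unsqueeze(-1).float()
                out = out + mask * w * expert(x)
        return out


# Usage: route a batch of token embeddings through the sparse MoE layer.
layer = SparseMoEFFN()
y = layer(torch.randn(2, 16, 384))   # y has shape (2, 16, 384)
```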

Cite

Text

Li et al. "Sparse Mixture-of-Experts Are Domain Generalizable Learners." NeurIPS 2022 Workshops: DistShift, 2022.

Markdown

[Li et al. "Sparse Mixture-of-Experts Are Domain Generalizable Learners." NeurIPS 2022 Workshops: DistShift, 2022.](https://mlanthology.org/neuripsw/2022/li2022neuripsw-sparse/)

BibTeX

@inproceedings{li2022neuripsw-sparse,
  title     = {{Sparse Mixture-of-Experts Are Domain Generalizable Learners}},
  author    = {Li, Bo and Shen, Yifei and Yang, Jingkang and Wang, Yezhen and Ren, Jiawei and Che, Tong and Zhang, Jun and Liu, Ziwei},
  booktitle = {NeurIPS 2022 Workshops: DistShift},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/li2022neuripsw-sparse/}
}