Causal Balancing for Domain Generalization

Abstract

While machine learning models rapidly advance the state-of-the-art on various real-world tasks, out-of-domain (OOD) generalization remains a challenging problem given the vulnerability of these models to spurious correlations. We propose a balanced mini-batch sampling strategy that transforms a biased data distribution into a spurious-free balanced distribution, based on the invariance of the underlying causal mechanisms of the data generation process. We argue that Bayes optimal classifiers trained on such a balanced distribution are minimax optimal across a sufficiently diverse environment space. We also provide an identifiability guarantee for the latent variable model of the proposed data generation process, given a sufficient number of training environments. Experiments are conducted on DomainBed, demonstrating empirically that our method obtains the best performance across the 20 baselines reported on the benchmark.
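To make the sampling strategy concrete, below is a minimal sketch of balanced mini-batch sampling in the simplified setting where a spurious attribute is directly observed (in the paper it is instead inferred through a latent variable model, which this sketch omits). The function name `balanced_batch_indices` and the toy arrays are hypothetical; the idea is to draw uniformly over (label, spurious-attribute) groups, then uniformly within each group, so that the label and the spurious attribute are approximately independent within each mini-batch.

import numpy as np

def balanced_batch_indices(labels, spurious, batch_size, rng=None):
    """Sample mini-batch indices so that the class label and the
    spurious attribute are approximately independent in the batch:
    draw a (label, spurious) group uniformly at random, then draw
    an example uniformly from within that group."""
    rng = np.random.default_rng() if rng is None else rng
    # Bucket example indices by (label, spurious-attribute) group.
    groups = {}
    for i, (y, s) in enumerate(zip(labels, spurious)):
        groups.setdefault((y, s), []).append(i)
    keys = list(groups)
    batch = []
    for _ in range(batch_size):
        g = keys[rng.integers(len(keys))]            # uniform over groups
        members = groups[g]
        batch.append(members[rng.integers(len(members))])  # uniform within group
    return np.array(batch)

# Usage with toy data: label 1 co-occurs with spurious attribute 1
# in the raw data, but the balanced batch breaks that correlation.
labels   = np.array([0, 0, 0, 1, 1, 1, 1, 1])
spurious = np.array([0, 0, 1, 1, 1, 1, 0, 1])
print(balanced_batch_indices(labels, spurious, batch_size=4))

Training a standard classifier on batches drawn this way approximates training on the spurious-free balanced distribution described in the abstract; the paper's contribution is to construct such batches without assuming the spurious attribute is observed.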

Cite

Text

Wang et al. "Causal Balancing for Domain Generalization." International Conference on Learning Representations, 2023.

Markdown

[Wang et al. "Causal Balancing for Domain Generalization." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/wang2023iclr-causal/)

BibTeX

@inproceedings{wang2023iclr-causal,
  title     = {{Causal Balancing for Domain Generalization}},
  author    = {Wang, Xinyi and Saxon, Michael and Li, Jiachen and Zhang, Hongyang and Zhang, Kun and Wang, William Yang},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/wang2023iclr-causal/}
}