Learning Causal Semantic Representation for Out-of-Distribution Prediction

Abstract

Conventional supervised learning methods, especially deep ones, are found to be sensitive to out-of-distribution (OOD) examples, largely because the learned representation mixes the semantic factor with the variation factor due to their domain-specific correlation, while only the semantic factor causes the output. To address the problem, we propose a Causal Semantic Generative model (CSG) based on causal reasoning so that the two factors are modeled separately, and develop methods for OOD prediction from a single training domain, which is common and challenging. The methods are based on the causal invariance principle, with a novel design in variational Bayes for both efficient learning and easy prediction. Theoretically, we prove that under certain conditions, CSG can identify the semantic factor by fitting training data, and this semantic identification guarantees the boundedness of the OOD generalization error and the success of adaptation. Empirical studies show improved OOD performance over prevailing baselines.

Cite

Text

Liu et al. "Learning Causal Semantic Representation for Out-of-Distribution Prediction." Neural Information Processing Systems, 2021.

Markdown

[Liu et al. "Learning Causal Semantic Representation for Out-of-Distribution Prediction." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/liu2021neurips-learning-a/)

BibTeX

@inproceedings{liu2021neurips-learning-a,
  title     = {{Learning Causal Semantic Representation for Out-of-Distribution Prediction}},
  author    = {Liu, Chang and Sun, Xinwei and Wang, Jindong and Tang, Haoyue and Li, Tao and Qin, Tao and Chen, Wei and Liu, Tie-Yan},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/liu2021neurips-learning-a/}
}