Integrated Latent Heterogeneity and Invariance Learning in Kernel Space

Abstract

The ability to generalize under distributional shift is essential for reliable machine learning, yet models optimized with empirical risk minimization (ERM) usually fail on non-i.i.d. test data. Recently, invariant learning methods for out-of-distribution (OOD) generalization have proposed to find causally invariant relationships using multiple environments. However, modern datasets are frequently assembled from multiple sources without explicit source labels, rendering many invariant learning methods inapplicable. In this paper, we propose the Kernelized Heterogeneous Risk Minimization (KerHRM) algorithm, which performs both latent heterogeneity exploration and invariant learning in kernel space, and then feeds back to the original neural network by appointing an invariant gradient direction. We theoretically justify our algorithm and empirically validate its effectiveness with extensive experiments.

Cite

Text

Liu et al. "Integrated Latent Heterogeneity and Invariance Learning in Kernel Space." Neural Information Processing Systems, 2021.

Markdown

[Liu et al. "Integrated Latent Heterogeneity and Invariance Learning in Kernel Space." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/liu2021neurips-integrated/)

BibTeX

@inproceedings{liu2021neurips-integrated,
  title     = {{Integrated Latent Heterogeneity and Invariance Learning in Kernel Space}},
  author    = {Liu, Jiashuo and Hu, Zheyuan and Cui, Peng and Li, Bo and Shen, Zheyan},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/liu2021neurips-integrated/}
}