Heterogeneous Risk Minimization
Abstract
Machine learning algorithms based on empirical risk minimization usually suffer from poor generalization performance due to the greedy exploitation of correlations in the training data that are not stable under distributional shifts. Recently, invariant learning methods for out-of-distribution (OOD) generalization have been proposed that leverage multiple training environments to find invariant relationships. However, modern datasets are frequently assembled by merging data from multiple sources without explicit source labels, and the resulting unobserved heterogeneity renders many invariant learning methods inapplicable. In this paper, we propose the Heterogeneous Risk Minimization (HRM) framework, which jointly learns the latent heterogeneity among the data and the invariant relationships, leading to stable prediction under distributional shifts. We theoretically characterize the roles of environment labels in invariant learning and justify our newly proposed HRM framework. Extensive experimental results validate the effectiveness of our HRM framework.
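In spirit, HRM alternates between identifying latent heterogeneous environments from the pooled data and learning a predictor that is invariant across them. Below is a minimal, illustrative sketch of such an alternation, assuming PyTorch and scikit-learn; the residual-based clustering heuristic and the IRMv1-style gradient penalty (Arjovsky et al., 2019) are stand-ins for the paper's heterogeneity identification and invariant prediction modules, not the authors' exact implementation.

# Sketch of an HRM-style alternation: infer latent environments, then
# minimize pooled risk plus a cross-environment invariance penalty.
# Illustrative only; module names and heuristics here are assumptions.
import torch
from sklearn.cluster import KMeans

def infer_environments(X, residuals, n_envs=2):
    """Heuristic environment inference: cluster points by features plus
    current residuals, on the intuition that unstable (variant)
    correlations differ across latent sources."""
    feats = torch.cat([X, residuals], dim=1).detach().numpy()
    return KMeans(n_clusters=n_envs, n_init=10).fit_predict(feats)

def irm_penalty(preds, y):
    """IRMv1-style penalty: squared gradient of the per-environment risk
    with respect to a fixed scalar dummy classifier."""
    scale = torch.tensor(1.0, requires_grad=True)
    loss = torch.nn.functional.mse_loss(preds * scale, y)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return grad.pow(2)

def hrm_like_step(model, X, y, optimizer, n_envs=2, lam=10.0):
    """One alternation: (1) infer environments from current residuals,
    (2) update the model with pooled risk + invariance penalty."""
    with torch.no_grad():
        residuals = model(X) - y
    envs = infer_environments(X, residuals, n_envs)
    optimizer.zero_grad()
    total = 0.0
    for e in range(n_envs):
        mask = torch.as_tensor(envs == e)
        if mask.sum() == 0:
            continue
        preds = model(X[mask])
        total = total + torch.nn.functional.mse_loss(preds, y[mask]) \
                + lam * irm_penalty(preds, y[mask])
    total.backward()
    optimizer.step()
    return float(total)

Calling hrm_like_step repeatedly lets the inferred environment partition and the predictor co-evolve, which is the core joint-learning idea the paper formalizes and analyzes.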
Cite
Text
Liu et al. "Heterogeneous Risk Minimization." International Conference on Machine Learning, 2021.
Markdown
[Liu et al. "Heterogeneous Risk Minimization." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/liu2021icml-heterogeneous/)
BibTeX
@inproceedings{liu2021icml-heterogeneous,
title = {{Heterogeneous Risk Minimization}},
author = {Liu, Jiashuo and Hu, Zheyuan and Cui, Peng and Li, Bo and Shen, Zheyan},
booktitle = {International Conference on Machine Learning},
year = {2021},
pages = {6804--6814},
volume = {139},
url = {https://mlanthology.org/icml/2021/liu2021icml-heterogeneous/}
}