Improving Domain Generalization with Interpolation Robustness
Abstract
We address domain generalization (DG) by viewing the underlying distributional shift as an interpolation between domains. We devise an algorithm to learn a representation that is robustly invariant under such interpolation, a property we term interpolation robustness. We also investigate how DG algorithms fail when training data is scarce. Through extensive experiments, we show that our approach significantly outperforms the recent state-of-the-art algorithm DIRT and the DeepAll baseline on average across different training-set sizes on the PACS and VLCS datasets.
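The idea of a representation that stays invariant under interpolation between domains can be illustrated with a minimal sketch. This is not the paper's actual objective; the encoder, the mixup-style interpolation, and the penalty below are illustrative assumptions, showing only the general shape of an interpolation-robustness term: the representation of an interpolated input should match the interpolation of the endpoint representations.

```python
import numpy as np

rng = np.random.default_rng(0)

def interpolate(x_a, x_b, lam):
    """Convex combination of points from two domains (mixup-style)."""
    return lam * x_a + (1.0 - lam) * x_b

def encoder(x, W):
    """Toy linear-tanh feature extractor standing in for a learned representation."""
    return np.tanh(x @ W)

def interpolation_robustness_penalty(x_a, x_b, W, lam):
    """Penalize the gap between (a) encoding the interpolated input and
    (b) interpolating the two endpoint encodings. Zero means the
    representation is exactly invariant under this interpolation."""
    z_mix = encoder(interpolate(x_a, x_b, lam), W)
    z_interp = interpolate(encoder(x_a, W), encoder(x_b, W), lam)
    return float(np.mean((z_mix - z_interp) ** 2))

# Two small batches drawn from two (hypothetical) training domains.
x_a = rng.normal(size=(8, 5))
x_b = rng.normal(size=(8, 5))
W = rng.normal(size=(5, 3))

penalty = interpolation_robustness_penalty(x_a, x_b, W, lam=0.5)
print(penalty)
```

In a training loop, a term like this would be added to the usual classification loss so that the encoder is pushed toward invariance along interpolation paths between domains; when the two batches coincide, the penalty vanishes by construction.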
Cite
Palakkadavath et al. "Improving Domain Generalization with Interpolation Robustness." NeurIPS 2022 Workshops: INTERPOLATE, 2022.

BibTeX
@inproceedings{palakkadavath2022neuripsw-improving-a,
title = {{Improving Domain Generalization with Interpolation Robustness}},
author = {Palakkadavath, Ragja and Nguyen-Tang, Thanh and Gupta, Sunil and Venkatesh, Svetha},
booktitle = {NeurIPS 2022 Workshops: INTERPOLATE},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/palakkadavath2022neuripsw-improving-a/}
}