Predicting Out-of-Distribution Error with Confidence Optimal Transport
Abstract
Out-of-distribution (OOD) data poses serious challenges to deployed machine learning models, as even subtle distribution shifts can incur significant performance drops. Being able to estimate a model's performance on test data is important in practice, as it indicates when to trust the model's decisions. We present a simple yet effective method to predict a model's performance on an unknown distribution without any additional annotation. Our approach is rooted in optimal transport theory, viewing the output softmax scores that a deep neural network produces on test samples as empirical samples from an unknown distribution. We show that our method, Confidence Optimal Transport (COT), provides robust estimates of a model's performance on a target domain. Despite its simplicity, our method achieves state-of-the-art results on three benchmark datasets and outperforms existing methods by a large margin.
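The abstract's core idea — treating target-domain softmax outputs as an empirical distribution and comparing it, via optimal transport, to a distribution of one-hot label vectors — can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the choice of cost (half the L1 distance between probability vectors), the use of `scipy.optimize.linear_sum_assignment` for exact matching between equal-sized empirical samples, and the function name `cot_error_estimate` are all our assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def cot_error_estimate(softmax_target, source_label_marginal, rng=None):
    """Sketch of a COT-style error estimate (assumed interface).

    softmax_target: (n, k) array of softmax outputs on unlabeled target data.
    source_label_marginal: (k,) label distribution estimated on the source domain.
    Returns the average optimal-transport matching cost, interpreted as a
    predicted error rate.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n, k = softmax_target.shape
    # Sample n one-hot vectors according to the source label marginal.
    labels = rng.choice(k, size=n, p=source_label_marginal)
    onehot = np.eye(k)[labels]
    # Pairwise cost between softmax vectors and one-hot vectors:
    # half the L1 distance (a common choice; an assumption here).
    cost = np.abs(softmax_target[:, None, :] - onehot[None, :, :]).sum(-1) / 2.0
    # Exact OT between two uniform empirical measures of equal size
    # reduces to a min-cost perfect matching.
    row, col = linear_sum_assignment(cost)
    return cost[row, col].mean()
```

Intuitively, if the model is confident and its predicted label distribution matches the source marginal, the matching cost is near zero (low predicted error); confidence mass placed on the "wrong" classes relative to the marginal drives the cost, and hence the predicted error, up.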
Cite
Text

Lu et al. "Predicting Out-of-Distribution Error with Confidence Optimal Transport." ICLR 2023 Workshops: Trustworthy_ML, 2023.

Markdown

[Lu et al. "Predicting Out-of-Distribution Error with Confidence Optimal Transport." ICLR 2023 Workshops: Trustworthy_ML, 2023.](https://mlanthology.org/iclrw/2023/lu2023iclrw-predicting/)

BibTeX
@inproceedings{lu2023iclrw-predicting,
title = {{Predicting Out-of-Distribution Error with Confidence Optimal Transport}},
author = {Lu, Yuzhe and Wang, Zhenlin and Zhai, Runtian and Kolouri, Soheil and Campbell, Joseph and Sycara, Katia P.},
booktitle = {ICLR 2023 Workshops: Trustworthy_ML},
year = {2023},
url = {https://mlanthology.org/iclrw/2023/lu2023iclrw-predicting/}
}