Certifiable Out-of-Distribution Generalization
Abstract
Machine learning methods suffer from test-time performance degradation when faced with out-of-distribution (OoD) data, whose distribution is not necessarily the same as the training data distribution. Although a plethora of algorithms have been proposed to mitigate this issue, it has been demonstrated that simultaneously outperforming empirical risk minimization (ERM) on different types of distributional-shift datasets is challenging for existing approaches. Moreover, without theoretical guarantees it is unknown how, and to what extent, these methods work on any given OoD datum. In this paper, we propose a certifiable out-of-distribution generalization method that provides provable OoD generalization performance guarantees via a functional optimization framework leveraging random distributions and max-margin learning for each input datum. With this approach, the proposed algorithmic scheme can provide certified accuracy for each input datum's prediction on the semantic space, and it achieves better performance simultaneously on OoD datasets dominated by correlation shifts or diversity shifts. Our code is available at https://github.com/ZlatanWilliams/StochasticDisturbanceLearning.
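The certification idea described in the abstract is in the spirit of randomized-smoothing-style guarantees: aggregate a base classifier's predictions over inputs perturbed by a random distribution, and certify a prediction only when the vote margin is large. The minimal Python sketch below illustrates that flavor only; the function names, the Gaussian perturbation choice, and the simple majority threshold are illustrative assumptions, not the paper's exact procedure, which is built on a functional optimization framework with max-margin learning.

```python
import numpy as np

def certified_prediction(model, x, num_samples=1000, sigma=0.25):
    """Monte Carlo estimate of a smoothed classifier's prediction.

    Aggregates the base model's hard predictions over random Gaussian
    perturbations of the input; a large vote margin supports a
    certifiable prediction. Illustrative sketch only -- the paper's
    framework uses its own choice of random distributions and a
    max-margin objective rather than this naive majority vote.
    """
    votes = {}
    for _ in range(num_samples):
        noisy_x = x + np.random.normal(0.0, sigma, size=x.shape)
        label = model(noisy_x)  # base classifier's hard prediction
        votes[label] = votes.get(label, 0) + 1
    top_label, top_count = max(votes.items(), key=lambda kv: kv[1])
    top_freq = top_count / num_samples
    # Certify only when the empirical vote frequency clears the margin;
    # a rigorous certificate would replace this with a confidence bound
    # (e.g., Clopper-Pearson) on the true vote probability.
    if top_freq > 0.5:
        return top_label, top_freq
    return None, top_freq  # abstain: no certified prediction
```

In practice, the strength of such a certificate depends on the perturbation distribution and the margin required, which is where the paper's per-datum max-margin formulation departs from this generic sketch.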
Cite
Text
Ye et al. "Certifiable Out-of-Distribution Generalization." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I9.26295

Markdown

[Ye et al. "Certifiable Out-of-Distribution Generalization." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/ye2023aaai-certifiable/) doi:10.1609/AAAI.V37I9.26295

BibTeX
@inproceedings{ye2023aaai-certifiable,
title = {{Certifiable Out-of-Distribution Generalization}},
author = {Ye, Nanyang and Zhu, Lin and Wang, Jia and Zeng, Zhaoyu and Shao, Jiayao and Peng, Chensheng and Pan, Bikang and Li, Kaican and Zhu, Jun},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2023},
pages = {10927--10935},
doi = {10.1609/AAAI.V37I9.26295},
url = {https://mlanthology.org/aaai/2023/ye2023aaai-certifiable/}
}