Robust yet Efficient Conformal Prediction Sets
Abstract
Conformal prediction (CP) can convert any model's output into prediction sets guaranteed to include the true label with any user-specified probability. However, like the underlying model, CP is vulnerable to adversarial test examples (evasion) and to perturbed calibration data (poisoning). We derive provably robust prediction sets by bounding the worst-case change in conformity scores; tighter bounds yield more efficient sets. Our approach covers both continuous and discrete (sparse) data, and the guarantees hold for both evasion and poisoning attacks (on both features and labels).
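For context, the standard (non-robust) split conformal procedure that the paper hardens can be sketched as follows. This is a minimal illustration assuming softmax probabilities from an arbitrary classifier; the function name and score choice are illustrative, and the paper's contribution is to additionally bound the worst-case change of these conformity scores under adversarial perturbation.

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Build prediction sets with >= 1 - alpha marginal coverage.

    cal_probs:  (n, K) softmax scores on held-out calibration data
    cal_labels: (n,)   true calibration labels
    test_probs: (m, K) softmax scores on test points
    """
    n = len(cal_labels)
    # Nonconformity score: 1 minus the softmax probability of the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Conservative finite-sample quantile level, capped at 1.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(scores, level, method="higher")
    # Prediction set: every class whose nonconformity clears the threshold.
    return [np.flatnonzero(1.0 - p <= q) for p in test_probs]
```

An adversary who perturbs a test input (evasion) or the calibration data (poisoning) can shift these scores arbitrarily, which is why the robust sets in the paper replace the plain threshold comparison with a worst-case bound.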
Cite
Text
Zargarbashi et al. "Robust yet Efficient Conformal Prediction Sets." International Conference on Machine Learning, 2024.

Markdown

[Zargarbashi et al. "Robust yet Efficient Conformal Prediction Sets." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/hzargarbashi2024icml-robust/)

BibTeX
@inproceedings{hzargarbashi2024icml-robust,
title = {{Robust yet Efficient Conformal Prediction Sets}},
author = {Zargarbashi, Soroush H. and Akhondzadeh, Mohammad Sadegh and Bojchevski, Aleksandar},
booktitle = {International Conference on Machine Learning},
year = {2024},
pages = {17123--17147},
volume = {235},
url = {https://mlanthology.org/icml/2024/hzargarbashi2024icml-robust/}
}