Calibrated Data-Dependent Constraints with Exact Satisfaction Guarantees
Abstract
We consider the task of training machine learning models with data-dependent constraints. Such constraints often arise as empirical versions of expected value constraints that enforce fairness or stability goals. We reformulate data-dependent constraints so that they are calibrated: enforcing the reformulated constraints guarantees that their expected value counterparts are satisfied with a user-prescribed probability. The resulting optimization problem is amenable to standard stochastic optimization algorithms, and we demonstrate the efficacy of our method on a fairness-sensitive classification task where we wish to guarantee the classifier's fairness (at test time).
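To make the calibration idea concrete, here is a minimal illustrative sketch (not the paper's exact reformulation): an empirical constraint of the form mean(g_i) ≤ 0 is tightened by a bootstrap-calibrated margin t so that the population constraint E[g] ≤ 0 holds with probability roughly 1 − α, for a user-prescribed α. The constraint values `g`, the bootstrap calibration, and all numeric choices below are assumptions for illustration only.

```python
import numpy as np

# Illustrative sketch (not the authors' exact method): calibrate a margin t
# so that enforcing the tightened empirical constraint mean(g) + t <= 0
# implies the population constraint E[g] <= 0 with probability ~ 1 - alpha.
# Here g holds per-sample constraint values at a fixed model parameter.

rng = np.random.default_rng(0)
alpha = 0.05                                      # user-prescribed failure probability
g = rng.normal(loc=-0.2, scale=1.0, size=2000)    # synthetic constraint values (assumption)

# Bootstrap the sampling distribution of the empirical mean to pick the margin.
boot_means = np.array([
    rng.choice(g, size=g.size, replace=True).mean() for _ in range(1000)
])
t = np.quantile(boot_means - g.mean(), 1 - alpha)  # (1 - alpha) calibration margin

# Enforce the calibrated (tightened) constraint in place of the naive one.
calibrated_ok = g.mean() + t <= 0
print(calibrated_ok)
```

In a training loop, the tightened constraint would replace the naive empirical one inside a standard constrained stochastic optimization routine; the margin shrinks as the sample size grows.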
Cite
Text
Xue et al. "Calibrated Data-Dependent Constraints with Exact Satisfaction Guarantees." Neural Information Processing Systems, 2022.
Markdown
[Xue et al. "Calibrated Data-Dependent Constraints with Exact Satisfaction Guarantees." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/xue2022neurips-calibrated/)
BibTeX
@inproceedings{xue2022neurips-calibrated,
title = {{Calibrated Data-Dependent Constraints with Exact Satisfaction Guarantees}},
author = {Xue, Songkai and Sun, Yuekai and Yurochkin, Mikhail},
booktitle = {Neural Information Processing Systems},
year = {2022},
url = {https://mlanthology.org/neurips/2022/xue2022neurips-calibrated/}
}