Certifying Robustness to Programmable Data Bias in Decision Trees
Abstract
Datasets can be biased due to societal inequities, human biases, under-representation of minorities, etc. Our goal is to certify that models produced by a learning algorithm are pointwise-robust to dataset biases. This is a challenging problem: it entails learning models for a large, or even infinite, number of datasets, ensuring that they all produce the same prediction. We focus on decision-tree learning due to the interpretable nature of the models. Our approach allows programmatically specifying *bias models* across a variety of dimensions (e.g., label-flipping or missing data), composing types of bias, and targeting bias towards a specific group. To certify robustness, we use a novel symbolic technique to evaluate a decision-tree learner on a large, or infinite, number of datasets, certifying that each and every dataset produces the same prediction for a specific test point. We evaluate our approach on datasets that are commonly used in the fairness literature, and demonstrate our approach's viability on a range of bias models.
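To make the certification problem concrete, here is a minimal brute-force sketch (not the paper's symbolic technique, which avoids enumeration): for a label-flipping bias model with budget k, enumerate every dataset reachable by flipping at most k labels, retrain a one-feature decision stump on each, and check that the prediction on the test point never changes. All function names and the stump learner are hypothetical illustrations.

```python
# Brute-force illustration of pointwise robustness certification under a
# label-flipping bias model. NOT the paper's symbolic method: this enumerates
# all C(n, <=k) flip sets, which only scales to tiny datasets and budgets.
from itertools import combinations

def train_stump(data):
    """Fit a one-feature decision stump (threshold + left-branch label)
    minimizing training error; ties broken deterministically by scan order."""
    best = None  # (error, threshold, left_label)
    for t in sorted({x for x, _ in data}):
        for left in (0, 1):
            err = sum((left if x <= t else 1 - left) != y for x, y in data)
            if best is None or err < best[0]:
                best = (err, t, left)
    _, t, left = best
    return lambda x: left if x <= t else 1 - left

def certify_label_flips(data, test_x, k):
    """Return True iff every dataset within label-flip budget k yields the
    same stump prediction on test_x as the unbiased dataset does."""
    base = train_stump(data)(test_x)
    for m in range(1, k + 1):
        for idx in combinations(range(len(data)), m):
            flipped = [(x, 1 - y) if i in idx else (x, y)
                       for i, (x, y) in enumerate(data)]
            if train_stump(flipped)(test_x) != base:
                return False  # a biased dataset changes the prediction
    return True

# Two well-separated clusters of scalar points with binary labels.
data = [(0, 0), (1, 0), (2, 0), (5, 1), (6, 1), (7, 1)]
print(certify_label_flips(data, 0.0, 1))  # test point deep inside a cluster
print(certify_label_flips(data, 4.0, 1))  # test point between the clusters
```

A test point deep inside one cluster certifies as robust to a single flipped label, while a point between the clusters does not: flipping one boundary label moves the learned threshold across it. The paper's symbolic technique reasons about all such datasets at once instead of retraining per dataset.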
Cite
Text
Meyer et al. "Certifying Robustness to Programmable Data Bias in Decision Trees." Neural Information Processing Systems, 2021.

Markdown

[Meyer et al. "Certifying Robustness to Programmable Data Bias in Decision Trees." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/meyer2021neurips-certifying/)

BibTeX
@inproceedings{meyer2021neurips-certifying,
  title     = {{Certifying Robustness to Programmable Data Bias in Decision Trees}},
  author    = {Meyer, Anna and Albarghouthi, Aws and D'Antoni, Loris},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/meyer2021neurips-certifying/}
}