A Compliance Checking Framework for DNN Models

Abstract

Growing awareness of the ethical use of machine learning (ML) models has spurred the development of fair models. Existing work in this area assumes that sensitive attributes are present in the data and can therefore build classifiers whose decisions remain agnostic to those attributes. However, in real-world settings, the end user of an ML model is unaware of its training data; besides, building custom models is not always feasible. Moreover, a pre-trained model that achieves high accuracy on a given dataset cannot be assumed to be fair. Unknown biases in the training data are the true culprits behind unfair models, i.e., models with disparate performance across groups in the dataset. In this preliminary research, we propose a different lens on building fair models: we equip the user with tools to discover blind spots and biases in a pre-trained model and to augment it with corrective measures.
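
As a minimal illustration of the notion of disparate group performance mentioned in the abstract (this is not the framework from the paper itself), the sketch below computes per-group accuracy of a pre-trained classifier's predictions and reports the largest gap between groups; the prediction arrays, group labels, and function name are hypothetical.

import numpy as np

def accuracy_gap(y_true, y_pred, groups):
    """Per-group accuracy and the largest disparity between any two groups.

    y_true, y_pred : arrays of ground-truth and predicted labels
    groups         : array of group membership (e.g., a sensitive attribute)
    """
    accs = {}
    for g in np.unique(groups):
        mask = groups == g
        accs[str(g)] = float(np.mean(y_true[mask] == y_pred[mask]))
    gap = max(accs.values()) - min(accs.values())
    return accs, gap

# Hypothetical example: predictions obtained from some pre-trained model.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 1])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

per_group, gap = accuracy_gap(y_true, y_pred, groups)
print(per_group)   # {'A': 0.75, 'B': 0.5}
print(gap)         # 0.25 -- a large gap would flag disparate performance

A large gap between the best- and worst-served groups is one simple signal that a pre-trained model may be unfair for the user's population, even if its overall accuracy is high.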

Cite

Text

Verma et al. "A Compliance Checking Framework for DNN Models." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/924

Markdown

[Verma et al. "A Compliance Checking Framework for DNN Models." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/verma2019ijcai-compliance/) doi:10.24963/IJCAI.2019/924

BibTeX

@inproceedings{verma2019ijcai-compliance,
  title     = {{A Compliance Checking Framework for DNN Models}},
  author    = {Verma, Sunny and Wang, Chen and Zhu, Liming and Liu, Wei},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {6470--6471},
  doi       = {10.24963/IJCAI.2019/924},
  url       = {https://mlanthology.org/ijcai/2019/verma2019ijcai-compliance/}
}