Explanations for Monotonic Classifiers.

Abstract

In many classification tasks there is a requirement of monotonicity. Concretely, if all else remains constant, increasing (resp. decreasing) the value of one or more features must not decrease (resp. increase) the value of the prediction. Despite comprehensive efforts on learning monotonic classifiers, dedicated approaches for explaining monotonic classifiers are scarce and classifier-specific. This paper describes novel algorithms for the computation of one formal explanation of a (black-box) monotonic classifier. These novel algorithms are polynomial (indeed linear) in the run time complexity of the classifier. Furthermore, the paper presents a practically efficient model-agnostic algorithm for enumerating formal explanations.
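The abstract describes the approach only at a high level; the sketch below is one illustrative (not authoritative) reading of how monotonicity can be exploited to extract one formal explanation with linearly many classifier queries. It assumes a black-box classifier callable `predict`, the instance being explained, and known per-feature domain bounds; the function name `find_axp` and all parameter names are hypothetical and not taken from the paper. The key observation is that, for a monotonic classifier, deciding whether fixing a subset of features is enough to keep the prediction unchanged only requires querying the two extreme completions of the free features.

from typing import Callable, List, Sequence, Set

def find_axp(predict: Callable[[Sequence[float]], int],
             instance: Sequence[float],
             lower_bounds: Sequence[float],
             upper_bounds: Sequence[float]) -> List[int]:
    # Return one subset-minimal set of feature indices that, when fixed to
    # their values in `instance`, suffices to keep the prediction unchanged.
    n = len(instance)
    target = predict(instance)
    fixed: Set[int] = set(range(n))

    def stable(fixed_set: Set[int]) -> bool:
        # Free features range over their full domain; by monotonicity it is
        # enough to query the two extreme completions of the fixed features.
        lo = [instance[i] if i in fixed_set else lower_bounds[i] for i in range(n)]
        hi = [instance[i] if i in fixed_set else upper_bounds[i] for i in range(n)]
        return predict(lo) == target and predict(hi) == target

    # Deletion-based minimisation: try to free each feature once and keep it
    # free only if the prediction stays invariant (two queries per feature).
    for i in range(n):
        if stable(fixed - {i}):
            fixed.discard(i)
    return sorted(fixed)

# Toy usage with a monotonic classifier that thresholds the feature sum.
clf = lambda x: int(sum(x) >= 2.0)
print(find_axp(clf, [1.0, 1.0, 0.0], [0.0, 0.0, 0.0], [1.0, 1.0, 1.0]))  # -> [0, 1]

Because the two extreme queries bracket every completion of the free features, equality of the prediction at both ends guarantees invariance over the whole domain, which is what keeps the total number of classifier calls linear in the number of features. Enumeration of further explanations, as discussed in the abstract, is outside the scope of this sketch.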

Cite

Text

Marques-Silva et al. "Explanations for Monotonic Classifiers." International Conference on Machine Learning, 2021.

Markdown

[Marques-Silva et al. "Explanations for Monotonic Classifiers." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/marquessilva2021icml-explanations/)

BibTeX

@inproceedings{marquessilva2021icml-explanations,
  title     = {{Explanations for Monotonic Classifiers}},
  author    = {Marques-Silva, Joao and Gerspacher, Thomas and Cooper, Martin C and Ignatiev, Alexey and Narodytska, Nina},
  booktitle = {International Conference on Machine Learning},
  year      = {2021},
  pages     = {7469--7479},
  volume    = {139},
  url       = {https://mlanthology.org/icml/2021/marquessilva2021icml-explanations/}
}