Explaining the Explainer: A First Theoretical Analysis of LIME

Abstract

Machine learning is increasingly used in sensitive applications, sometimes replacing humans in critical decision-making processes. As such, interpretability of these algorithms is a pressing need. One popular algorithm providing interpretability is LIME (Local Interpretable Model-agnostic Explanations). In this paper, we provide the first theoretical analysis of LIME. We derive closed-form expressions for the coefficients of the interpretable model when the function to explain is linear. The good news is that these coefficients are proportional to the gradient of the function to explain: LIME indeed discovers meaningful features. However, our analysis also reveals that poor choices of parameters can lead LIME to miss important features.
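As a loose illustration of the abstract's main claim, the sketch below mimics the core LIME loop for tabular data: sample perturbations around the point to explain, weight them with an exponential kernel, and fit a weighted linear surrogate. It deliberately skips LIME's feature discretization step (the setting the paper actually analyzes), and the Gaussian sampling scheme and bandwidth name `nu` are simplifying assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Function to explain: linear, f(x) = w . x, so grad f = w everywhere.
d = 5
w = rng.normal(size=d)
f = lambda X: X @ w

# Point to explain, and LIME-style perturbations sampled around it.
xi = rng.normal(size=d)
n = 10_000
Z = xi + rng.normal(scale=1.0, size=(n, d))

# Exponential kernel weights with bandwidth nu (an assumed value).
nu = 1.0
weights = np.exp(-np.linalg.norm(Z - xi, axis=1) ** 2 / (2 * nu ** 2))

# Weighted least-squares surrogate: LIME's "interpretable model".
surrogate = LinearRegression().fit(Z, f(Z), sample_weight=weights)

# For a linear f, the surrogate coefficients recover the gradient.
print(np.round(surrogate.coef_, 3))  # approximately equal to w
print(np.round(w, 3))
```

Because `f` here is exactly linear and the surrogate is fit on continuous perturbations, the coefficients match the gradient for any bandwidth; the abstract's warning about parameter choices concerns LIME's actual discretized interpretable features, where the closed-form coefficients depend on the kernel bandwidth and can shrink important features toward zero.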

Cite

Text

Garreau and Luxburg. "Explaining the Explainer: A First Theoretical Analysis of LIME." Artificial Intelligence and Statistics, 2020.

Markdown

[Garreau and Luxburg. "Explaining the Explainer: A First Theoretical Analysis of LIME." Artificial Intelligence and Statistics, 2020.](https://mlanthology.org/aistats/2020/garreau2020aistats-explaining/)

BibTeX

@inproceedings{garreau2020aistats-explaining,
  title     = {{Explaining the Explainer: A First Theoretical Analysis of LIME}},
  author    = {Garreau, Damien and Luxburg, Ulrike},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2020},
  pages     = {1287--1296},
  volume    = {108},
  url       = {https://mlanthology.org/aistats/2020/garreau2020aistats-explaining/}
}