BELIEF - Bayesian Sign Entropy Regularization for LIME Framework
Abstract
Explanations produced by Local Interpretable Model-agnostic Explanations (LIME) are often inconsistent across runs, making them unreliable for eXplainable AI (XAI). The inconsistency stems from sign flips and rank variability of the segments across runs. We propose a Bayesian regularization approach that reduces sign flips, which in turn stabilizes feature rankings and yields significantly more consistent explanations. The proposed approach enforces sparsity by incorporating a Sign Entropy prior on the coefficient distribution and dynamically eliminates features during optimization. Our results demonstrate that explanations from the proposed method exhibit significantly better consistency and fidelity than LIME and its earlier variants. Further, our approach achieves consistency and fidelity comparable to the latest LIME variant, SLICE (CVPR 2024), at a significantly lower execution time.
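The abstract does not spell out the objective, but the central quantity, the entropy of each coefficient's sign across surrogate fits, is easy to sketch. Below is a minimal illustration, assuming sign entropy is the binary entropy of the probability that a coefficient is positive across repeated fits; the `sign_entropy` helper, the pruning threshold, and the toy coefficient samples are hypothetical, not the authors' implementation.

```python
import numpy as np

def sign_entropy(coef_samples, eps=1e-12):
    """Binary entropy of each coefficient's sign across samples.

    coef_samples: (n_samples, n_features) array of surrogate-model
    coefficients, e.g., from repeated LIME fits or posterior draws.
    Returns a (n_features,) array: 0 means the sign never flips,
    1 (the maximum) means the sign flips half the time.
    """
    p_pos = (coef_samples > 0).mean(axis=0)
    p_pos = np.clip(p_pos, eps, 1 - eps)
    return -(p_pos * np.log2(p_pos) + (1 - p_pos) * np.log2(1 - p_pos))

# Toy illustration: two sign-stable features and one sign-flipping one.
rng = np.random.default_rng(0)
samples = np.column_stack([
    rng.normal(2.0, 0.3, 500),   # reliably positive coefficient
    rng.normal(-1.5, 0.3, 500),  # reliably negative coefficient
    rng.normal(0.0, 0.5, 500),   # sign flips run to run
])
H = sign_entropy(samples)
print(np.round(H, 3))            # roughly [0. 0. 1.]

# A feature whose sign entropy exceeds a threshold could be pruned
# during optimization, encouraging sparse, sign-stable explanations.
keep = H < 0.5                   # threshold chosen for illustration only
print(keep)                      # [ True  True False]
```

Penalizing or pruning high-entropy coefficients directly targets the sign flips the abstract identifies as the source of LIME's run-to-run inconsistency.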
Cite
Text
Bora et al. "BELIEF - Bayesian Sign Entropy Regularization for LIME Framework." Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, 2025.

Markdown
[Bora et al. "BELIEF - Bayesian Sign Entropy Regularization for LIME Framework." Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, 2025.](https://mlanthology.org/uai/2025/bora2025uai-belief/)

BibTeX
@inproceedings{bora2025uai-belief,
title = {{BELIEF - Bayesian Sign Entropy Regularization for LIME Framework}},
author = {Bora, Revoti Prasad and Terhörst, Philipp and Veldhuis, Raymond and Ramachandra, Raghavendra and Raja, Kiran},
booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
year = {2025},
pages = {332--354},
volume = {286},
url = {https://mlanthology.org/uai/2025/bora2025uai-belief/}
}