Neuron Explanations for Conformal Prediction (Student Abstract)
Abstract
Conformal prediction (CP) has become a popular technique for uncertainty quantification in deep neural networks (DNNs), providing statistically rigorous uncertainty sets. However, existing CP methods fail to clarify the origins of predictive uncertainties. While neuron-level interpretability has been effective in revealing the internal mechanisms of DNNs, explaining CP at the neuron level remains unexplored. Moreover, generating neuron explanations for CP is challenging due to the discrete and non-differentiable characteristics of CP, and the labor-intensive process of semantic annotation. To address these limitations, this paper proposes a novel neuron explanation approach for CP that identifies neurons crucial for understanding predictive uncertainties and automatically generates semantic explanations. The effectiveness of the proposed method is validated through both qualitative and quantitative experiments.
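The abstract does not include code, but the CP procedure it builds on can be illustrated concretely. The sketch below shows standard split conformal prediction for a classifier: nonconformity scores are computed on a held-out calibration set, a finite-sample-corrected quantile gives a threshold, and each test point's prediction set contains every class whose score falls below it. The function name, the choice of score (one minus the true-class softmax probability), and the toy data are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction sketch (assumed standard recipe, not the
    paper's method): returns prediction sets with marginal coverage >= 1 - alpha."""
    n = len(cal_labels)
    # Nonconformity score: 1 - softmax probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level, clamped to 1 for small n.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(scores, q_level, method="higher")
    # A class enters the prediction set if its score does not exceed the threshold.
    return [np.where(1.0 - p <= q_hat)[0] for p in test_probs]

# Toy usage with random "softmax" outputs (illustrative only).
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(5), size=200)
cal_labels = rng.integers(0, 5, size=200)
test_probs = rng.dirichlet(np.ones(5), size=3)
print(conformal_prediction_sets(cal_probs, cal_labels, test_probs))
```

Because the threshold is a quantile of discrete calibration scores, the resulting sets are non-differentiable in the network's parameters, which is the obstacle to neuron-level attribution that the abstract highlights.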
Cite
Text
Lidder et al. "Neuron Explanations for Conformal Prediction (Student Abstract)." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I28.35270
Markdown
[Lidder et al. "Neuron Explanations for Conformal Prediction (Student Abstract)." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/lidder2025aaai-neuron/) doi:10.1609/AAAI.V39I28.35270
BibTeX
@inproceedings{lidder2025aaai-neuron,
title = {{Neuron Explanations for Conformal Prediction (Student Abstract)}},
author = {Lidder, Divya and Morse, Kathryn and Sullivan, Bridget and Qian, Wei and Miao, Chenglin and Huai, Mengdi},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {29412--29414},
doi = {10.1609/AAAI.V39I28.35270},
url = {https://mlanthology.org/aaai/2025/lidder2025aaai-neuron/}
}