Diffusion-Guided Counterfactual Generation for Model Explainability

Abstract

Generating counterfactual explanations is one of the most effective approaches for uncovering the inner workings of black-box neural network models and building user trust. While remarkable strides have been made in generative modeling using diffusion models in domains like vision, their utility in generating counterfactual explanations for structured modalities remains unexplored. In this paper, we introduce the Structured Counterfactual Diffuser (SCD), the first plug-and-play framework leveraging diffusion for generating counterfactual explanations in structured data. SCD learns the underlying data distribution via a diffusion model, which is then guided at test time to generate counterfactuals for any arbitrary black-box model, input, and desired prediction. Our experiments show that our counterfactuals not only exhibit higher plausibility than the existing state-of-the-art but also significantly better proximity and diversity.
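To make the guided-sampling idea in the abstract concrete, the following is a minimal illustrative sketch of classifier-guided reverse diffusion toward a desired prediction, not the authors' released implementation. All names (denoiser, black_box, guidance_scale, proximity_weight) and the specific guidance form are assumptions for illustration; in particular, the sketch assumes gradient access to the model being explained, which a truly black-box setting would require approximating or replacing with a surrogate.

# Illustrative sketch only (assumed interfaces, not the paper's code):
# guide DDPM reverse sampling toward a target class while staying close
# to the original input, yielding a counterfactual candidate.
import torch
import torch.nn.functional as F

def precompute_schedule(T=1000):
    # Standard linear beta schedule and its cumulative products.
    betas = torch.linspace(1e-4, 0.02, T)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    return betas, alphas, alpha_bars

def guided_counterfactual(denoiser, black_box, x_orig, target_class,
                          guidance_scale=5.0, proximity_weight=1.0, T=1000):
    """Sample x_cf such that black_box(x_cf) is pushed toward target_class
    (a LongTensor of class indices) while remaining near x_orig.
    `denoiser` is assumed to predict the noise eps_theta(x_t, t)."""
    betas, alphas, alpha_bars = precompute_schedule(T)
    x_t = torch.randn_like(x_orig)  # start the reverse process from pure noise
    for t in reversed(range(T)):
        # Unguided DDPM posterior mean computed from the predicted noise.
        with torch.no_grad():
            eps = denoiser(x_t, torch.tensor([t]))
        mean = (x_t - betas[t] / (1 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()

        # Guidance term: gradient of (target-class loss + proximity to x_orig).
        x_in = x_t.detach().requires_grad_(True)
        logits = black_box(x_in)
        loss = F.cross_entropy(logits, target_class) + \
               proximity_weight * F.mse_loss(x_in, x_orig)
        grad = torch.autograd.grad(loss, x_in)[0]
        mean = mean - guidance_scale * betas[t] * grad  # shift mean against the loss

        noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
        x_t = mean + betas[t].sqrt() * noise
    return x_t

The proximity term is one simple way to encode the paper's stated goal of counterfactuals that stay close to the input; plausibility comes from the learned diffusion prior itself, since samples are drawn from the modeled data distribution rather than optimized freely in input space.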

Cite

Text

Madaan and Bedathur. "Diffusion-Guided Counterfactual Generation for Model Explainability." NeurIPS 2023 Workshops: XAIA, 2023.

Markdown

[Madaan and Bedathur. "Diffusion-Guided Counterfactual Generation for Model Explainability." NeurIPS 2023 Workshops: XAIA, 2023.](https://mlanthology.org/neuripsw/2023/madaan2023neuripsw-diffusionguided/)

BibTeX

@inproceedings{madaan2023neuripsw-diffusionguided,
  title     = {{Diffusion-Guided Counterfactual Generation for Model Explainability}},
  author    = {Madaan, Nishtha and Bedathur, Srikanta},
  booktitle = {NeurIPS 2023 Workshops: XAIA},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/madaan2023neuripsw-diffusionguided/}
}