Lakkaraju, Himabindu (70 publications)
ICMLW 2023: Accurate, Explainable, and Private Models: Providing Recourse While Minimizing Training Data Leakage
NeurIPS 2023: Discriminative Feature Attributions: Bridging Post Hoc Explainability and Inherent Interpretability
NeurIPS 2023: Which Models Have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness
ICMLW 2023: Which Models Have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness
AISTATS 2022: Exploring Counterfactual Explanations Through the Lens of Adversarial Examples: A Theoretical and Empirical Analysis
AISTATS 2022: Probing GNN Explainers: A Rigorous Theoretical and Empirical Analysis of GNN Explanation Methods
NeurIPSW 2022: TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations
NeurIPS 2022: Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post Hoc Explanations
NeurIPS 2020: Beyond Individualized Recourse: Interpretable and Interactive Summaries of Actionable Recourses