ML Anthology
Pawelczyk, Martin (16 publications)

NeurIPS 2025
Efficiently Verifiable Proofs of Data Attribution
Ari Karchmer, Seth Neel, Martin Pawelczyk

ICLR 2025
Machine Unlearning Fails to Remove Data Poisoning Attacks
Martin Pawelczyk, Jimmy Z. Di, Yiwei Lu, Gautam Kamath, Ayush Sekhari, Seth Neel

ICMLW 2024
Explaining the Model, Protecting Your Data: Revealing and Mitigating the Data Privacy Risks of Post-Hoc Model Explanations via Membership Inference
Catherine Huang, Martin Pawelczyk, Himabindu Lakkaraju

AAAI 2024
I Prefer Not to Say: Protecting User Consent in Models with Optional Personal Data
Tobias Leemann, Martin Pawelczyk, Christian Thomas Eberle, Gjergji Kasneci

ICML 2024
In-Context Unlearning: Language Models as Few-Shot Unlearners
Martin Pawelczyk, Seth Neel, Himabindu Lakkaraju

ICMLW 2024
On the Privacy Risks of Post-Hoc Explanations of Foundation Models
Catherine Huang, Martin Pawelczyk, Himabindu Lakkaraju

NeurIPS 2023
Gaussian Membership Inference Privacy
Tobias Leemann, Martin Pawelczyk, Gjergji Kasneci

ICLR 2023
Language Models Are Realistic Tabular Data Generators
Vadim Borisov, Kathrin Sessler, Tobias Leemann, Martin Pawelczyk, Gjergji Kasneci

AISTATS 2023
On the Privacy Risks of Algorithmic Recourse
Martin Pawelczyk, Himabindu Lakkaraju, Seth Neel

ICLR 2023
On the Trade-Off Between Actionable Explanations and the Right to Be Forgotten
Martin Pawelczyk, Tobias Leemann, Asia Biega, Gjergji Kasneci

ICLR 2023
Probabilistically Robust Recourse: Navigating the Trade-Offs Between Costs and Robustness in Algorithmic Recourse
Martin Pawelczyk, Teresa Datta, Johan Van den Heuvel, Gjergji Kasneci, Himabindu Lakkaraju

AISTATS 2022
Exploring Counterfactual Explanations Through the Lens of Adversarial Examples: A Theoretical and Empirical Analysis
Martin Pawelczyk, Chirag Agarwal, Shalmali Joshi, Sohini Upadhyay, Himabindu Lakkaraju

NeurIPSW 2022
On the Trade-Off Between Actionable Explanations and the Right to Be Forgotten
Martin Pawelczyk, Tobias Leemann, Asia Biega, Gjergji Kasneci

NeurIPS 2022
OpenXAI: Towards a Transparent Evaluation of Model Explanations
Chirag Agarwal, Satyapriya Krishna, Eshika Saxena, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, Himabindu Lakkaraju

ICLRW 2022
Rethinking Stability for Attribution-Based Explanations
Chirag Agarwal, Nari Johnson, Martin Pawelczyk, Satyapriya Krishna, Eshika Saxena, Marinka Zitnik, Himabindu Lakkaraju

UAI 2020
On Counterfactual Explanations Under Predictive Multiplicity
Martin Pawelczyk, Klaus Broelemann, Gjergji Kasneci