Krishna, Satyapriya

12 publications

ICLR 2025 More RLHF, More Trust? on the Impact of Preference Alignment on Trustworthiness Aaron Jiaxun Li, Satyapriya Krishna, Himabindu Lakkaraju
TMLR 2025 Operationalizing a Threat Model for Red-Teaming Large Language Models (LLMs) Apurv Verma, Satyapriya Krishna, Sebastian Gehrmann, Madhavan Seshadri, Anu Pradhan, John A. Doucette, David Rabinowitz, Leslie Barrett, Tom Ault, Hai Phan
NeurIPS 2024 Croissant: A Metadata Format for ML-Ready Datasets Mubashara Akhtar, Omar Benjelloun, Costanza Conforti, Luca Foschini, Pieter Gijsbers, Joan Giner-Miguelez, Sujata Goswami, Nitisha Jain, Michalis Karamousadakis, Satyapriya Krishna, Michael Kuchnik, Sylvain Lesage, Quentin Lhoest, Pierre Marcenac, Manil Maskey, Peter Mattson, Luis Oala, Hamidah Oderinwale, Pierre Ruyssen, Tim Santos, Rajat Shinde, Elena Simperl, Arjun Suresh, Goeffry Thomas, Slava Tykhonov, Joaquin Vanschoren, Susheel Varma, Jos van der Velde, Steffen Vogler, Carole-Jean Wu, Luyao Zhang
TMLR 2024 The Disagreement Problem in Explainable Machine Learning: A Practitioner’s Perspective Satyapriya Krishna, Tessa Han, Alex Gu, Steven Wu, Shahin Jabbari, Himabindu Lakkaraju
ICML 2024 Understanding the Effects of Iterative Prompting on Truthfulness Satyapriya Krishna, Chirag Agarwal, Himabindu Lakkaraju
NeurIPSW 2023 Are Large Language Models Post Hoc Explainers? Nicholas Kroeger, Dan Ley, Satyapriya Krishna, Chirag Agarwal, Himabindu Lakkaraju
NeurIPS 2023 Post Hoc Explanations of Language Models Can Improve Language Models Satyapriya Krishna, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh, Himabindu Lakkaraju
ICML 2023 Towards Bridging the Gaps Between the Right to Explanation and the Right to Be Forgotten Satyapriya Krishna, Jiaqi Ma, Himabindu Lakkaraju
NeurIPSW 2022 On the Impact of Adversarially Robust Models on Algorithmic Recourse Satyapriya Krishna, Chirag Agarwal, Himabindu Lakkaraju
NeurIPS 2022 OpenXAI: Towards a Transparent Evaluation of Model Explanations Chirag Agarwal, Satyapriya Krishna, Eshika Saxena, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, Himabindu Lakkaraju
ICLRW 2022 Rethinking Stability for Attribution-Based Explanations Chirag Agarwal, Nari Johnson, Martin Pawelczyk, Satyapriya Krishna, Eshika Saxena, Marinka Zitnik, Himabindu Lakkaraju
NeurIPSW 2022 TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations Dylan Z Slack, Satyapriya Krishna, Himabindu Lakkaraju, Sameer Singh