Chouldechova, Alexandra

11 publications

NeurIPS 2025 · Comparison Requires Valid Measurement: Rethinking Attack Success Rate Comparisons in AI Red Teaming · Alexandra Chouldechova, A. Feder Cooper, Solon Barocas, Abhinav Palia, Dan Vann, Hanna Wallach

NeurIPS 2025 · Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy and Research · A. Feder Cooper, Christopher A. Choquette-Choo, Miranda Bogen, Kevin Klyman, Matthew Jagielski, Katja Filippova, Ken Liu, Alexandra Chouldechova, Jamie Hayes, Yangsibo Huang, Eleni Triantafillou, Peter Kairouz, Nicole Elyse Mitchell, Niloofar Mireshghallah, Abigail Z. Jacobs, James Grimmelmann, Vitaly Shmatikov, Christopher De Sa, Ilia Shumailov, Andreas Terzis, Solon Barocas, Jennifer Wortman Vaughan, Danah Boyd, Yejin Choi, Sanmi Koyejo, Fernando Delgado, Percy Liang, Daniel E. Ho, Pamela Samuelson, Miles Brundage, David Bau, Seth Neel, Hanna Wallach, Amy B. Cyphert, Mark Lemley, Nicolas Papernot, Katherine Lee

ICML 2025 · Position: Evaluating Generative AI Systems Is a Social Science Measurement Challenge · Hanna Wallach, Meera Desai, A. Feder Cooper, Angelina Wang, Chad Atalla, Solon Barocas, Su Lin Blodgett, Alexandra Chouldechova, Emily Corvi, P. Alex Dow, Jean Garcia-Gathright, Alexandra Olteanu, Nicholas J Pangakis, Stefanie Reed, Emily Sheng, Dan Vann, Jennifer Wortman Vaughan, Matthew Vogel, Hannah Washington, Abigail Z. Jacobs

NeurIPS 2025 · Validating LLM-as-a-Judge Systems Under Rating Indeterminacy · Luke Guerdan, Solon Barocas, Ken Holstein, Hanna Wallach, Steven Wu, Alexandra Chouldechova

NeurIPSW 2024 · AI Red Teaming Through the Lens of Measurement Theory · Alexandra Chouldechova, A. Feder Cooper, Abhinav Palia, Dan Vann, Chad Atalla, Hannah Washington, Emily Sheng, Hanna Wallach

NeurIPS 2024 · SureMap: Simultaneous Mean Estimation for Single-Task and Multi-Task Disaggregated Evaluation · Mikhail Khodak, Lester Mackey, Alexandra Chouldechova, Miroslav Dudík

ECCV 2022 · Unsupervised and Semi-Supervised Bias Benchmarking in Face Recognition · Alexandra Chouldechova, Siqi Deng, Yongxin Wang, Wei Xia, Pietro Perona

ICML 2021 · Characterizing Fairness over the Set of Good Models Under Selective Labels · Amanda Coston, Ashesh Rambachan, Alexandra Chouldechova

NeurIPS 2020 · Counterfactual Predictions Under Runtime Confounding · Amanda Coston, Edward Kennedy, Alexandra Chouldechova

AISTATS 2020 · Fairness Evaluation in Presence of Biased Noisy Labels · Riccardo Fogliato, Alexandra Chouldechova, Max G'Sell

NeurIPS 2018 · Does Mitigating ML's Impact Disparity Require Treatment Disparity? · Zachary Lipton, Julian McAuley, Alexandra Chouldechova