Nasr, Milad

22 publications

ICML 2025 AutoAdvExBench: Benchmarking Autonomous Exploitation of Adversarial Example Defenses Nicholas Carlini, Edoardo Debenedetti, Javier Rando, Milad Nasr, Florian Tramèr
ICML 2025 Exploring and Mitigating Adversarial Manipulation of Voting-Based Leaderboards Yangsibo Huang, Milad Nasr, Anastasios Nikolas Angelopoulos, Nicholas Carlini, Wei-Lin Chiang, Christopher A. Choquette-Choo, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Ken Liu, Ion Stoica, Florian Tramèr, Chiyuan Zhang
NeurIPS 2025 Exploring the Limits of Strong Membership Inference Attacks on Large Language Models Jamie Hayes, Ilia Shumailov, Christopher A. Choquette-Choo, Matthew Jagielski, Georgios Kaissis, Milad Nasr, Meenatchi Sundaram Muthu Selva Annamalai, Niloofar Mireshghallah, Igor Shilov, Matthieu Meeus, Yves-Alexandre de Montjoye, Katherine Lee, Franziska Boenisch, Adam Dziedzic, A. Feder Cooper
ICLR 2025 On Evaluating the Durability of Safeguards for Open-Weight LLMs Xiangyu Qi, Boyi Wei, Nicholas Carlini, Yangsibo Huang, Tinghao Xie, Luxi He, Matthew Jagielski, Milad Nasr, Prateek Mittal, Peter Henderson
ICLR 2025 Privacy Auditing of Large Language Models Ashwinee Panda, Xinyu Tang, Christopher A. Choquette-Choo, Milad Nasr, Prateek Mittal
TMLR 2025 Private Fine-Tuning of Large Language Models with Zeroth-Order Optimization Xinyu Tang, Ashwinee Panda, Milad Nasr, Saeed Mahloujifar, Prateek Mittal
ICLR 2025 Scalable Extraction of Training Data from Aligned, Production Language Models Milad Nasr, Javier Rando, Nicholas Carlini, Jonathan Hayase, Matthew Jagielski, A. Feder Cooper, Daphne Ippolito, Christopher A. Choquette-Choo, Florian Tramèr, Katherine Lee
ICLR 2025 The Last Iterate Advantage: Empirical Auditing and Principled Heuristic Analysis of Differentially Private SGD Milad Nasr, Thomas Steinke, Borja Balle, Christopher A. Choquette-Choo, Arun Ganesh, Matthew Jagielski, Jamie Hayes, Abhradeep Guha Thakurta, Adam Smith, Andreas Terzis
ICLR 2025 Unlearn and Burn: Adversarial Machine Unlearning Requests Destroy Model Accuracy Yangsibo Huang, Daogao Liu, Lynn Chua, Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, Milad Nasr, Amer Sinha, Chiyuan Zhang
ICML 2024 Auditing Private Prediction Karan Chadha, Matthew Jagielski, Nicolas Papernot, Christopher A. Choquette-Choo, Milad Nasr
ICMLW 2024 Privacy Auditing of Large Language Models Ashwinee Panda, Xinyu Tang, Milad Nasr, Christopher A. Choquette-Choo, Prateek Mittal
ICMLW 2024 Private Fine-Tuning of Large Language Models with Zeroth-Order Optimization Xinyu Tang, Ashwinee Panda, Milad Nasr, Saeed Mahloujifar, Prateek Mittal
NeurIPS 2024 Query-Based Adversarial Prompt Generation Jonathan Hayase, Ema Borevkovic, Nicholas Carlini, Florian Tramèr, Milad Nasr
ICML 2024 Stealing Part of a Production Language Model Nicholas Carlini, Daniel Paleka, Krishnamurthy Dj Dvijotham, Thomas Steinke, Jonathan Hayase, A. Feder Cooper, Katherine Lee, Matthew Jagielski, Milad Nasr, Arthur Conmy, Eric Wallace, David Rolnick, Florian Tramèr
ICMLW 2023 Algorithms for Optimal Adaptation of Diffusion Models to Reward Functions Krishnamurthy Dj Dvijotham, Shayegan Omidshafiei, Kimin Lee, Katherine M. Collins, Deepak Ramachandran, Adrian Weller, Mohammad Ghavamzadeh, Milad Nasr, Ying Fan, Jeremiah Zhe Liu
NeurIPS 2023 Are Aligned Neural Networks Adversarially Aligned? Nicholas Carlini, Milad Nasr, Christopher A. Choquette-Choo, Matthew Jagielski, Irena Gao, Pang Wei Koh, Daphne Ippolito, Florian Tramèr, Ludwig Schmidt
ICML 2023 Effectively Using Public Data in Privacy Preserving Machine Learning Milad Nasr, Saeed Mahloujifar, Xinyu Tang, Prateek Mittal, Amir Houmansadr
NeurIPS 2023 Privacy Auditing with One (1) Training Run Thomas Steinke, Milad Nasr, Matthew Jagielski
ICMLW 2023 Privacy Auditing with One (1) Training Run Thomas Steinke, Milad Nasr, Matthew Jagielski
NeurIPS 2023 Students Parrot Their Teachers: Membership Inference on Model Distillation Matthew Jagielski, Milad Nasr, Katherine Lee, Christopher A. Choquette-Choo, Nicholas Carlini, Florian Tramèr
ICML 2023 Why Is Public Pretraining Necessary for Private Model Training? Arun Ganesh, Mahdi Haghifam, Milad Nasr, Sewoong Oh, Thomas Steinke, Om Thakkar, Abhradeep Guha Thakurta, Lun Wang
NeurIPSW 2021 A Novel Self-Distillation Architecture to Defeat Membership Inference Attacks Xinyu Tang, Saeed Mahloujifar, Liwei Song, Virat Shejwalkar, Milad Nasr, Amir Houmansadr, Prateek Mittal