Tramèr, Florian

56 publications

ICLR 2025 Adversarial Perturbations Cannot Reliably Protect Artists from Generative AI Robert Hönig, Javier Rando, Nicholas Carlini, Florian Tramèr
ICLR 2025 Adversarial Search Engine Optimization for Large Language Models Fredrik Nestaas, Edoardo Debenedetti, Florian Tramèr
TMLR 2025 An Adversarial Perspective on Machine Unlearning for AI Safety Jakub Łucki, Boyi Wei, Yangsibo Huang, Peter Henderson, Florian Tramèr, Javier Rando
ICML 2025 AutoAdvExBench: Benchmarking Autonomous Exploitation of Adversarial Example Defenses Nicholas Carlini, Edoardo Debenedetti, Javier Rando, Milad Nasr, Florian Tramèr
ICLRW 2025 Blind Baselines Beat Membership Inference Attacks for Foundation Models Debeshee Das, Jie Zhang, Florian Tramèr
ICLR 2025 Consistency Checks for Language Model Forecasters Daniel Paleka, Abhimanyu Pallavi Sudhir, Alejandro Alvarez, Vineeth Bhat, Adam Shen, Evan Wang, Florian Tramèr
ICML 2025 Exploring and Mitigating Adversarial Manipulation of Voting-Based Leaderboards Yangsibo Huang, Milad Nasr, Anastasios Nikolas Angelopoulos, Nicholas Carlini, Wei-Lin Chiang, Christopher A. Choquette-Choo, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Ken Liu, Ion Stoica, Florian Tramèr, Chiyuan Zhang
ICLR 2025 Measuring Non-Adversarial Reproduction of Training Data in Large Language Models Michael Aerni, Javier Rando, Edoardo Debenedetti, Nicholas Carlini, Daphne Ippolito, Florian Tramèr
ICLR 2025 Persistent Pre-Training Poisoning of LLMs Yiming Zhang, Javier Rando, Ivan Evtimov, Jianfeng Chi, Eric Michael Smith, Nicholas Carlini, Florian Tramèr, Daphne Ippolito
NeurIPS 2025 RealMath: A Continuous Benchmark for Evaluating Language Models on Research-Level Mathematics Jie Zhang, Cezara Petrui, Kristina Nikolić, Florian Tramèr
TMLR 2025 Reliable and Responsible Foundation Models Xinyu Yang, Junlin Han, Rishi Bommasani, Jinqi Luo, Wenjie Qu, Wangchunshu Zhou, Adel Bibi, Xiyao Wang, Jaehong Yoon, Elias Stengel-Eskin, Shengbang Tong, Lingfeng Shen, Rafael Rafailov, Runjia Li, Zhaoyang Wang, Yiyang Zhou, Chenhang Cui, Yu Wang, Wenhao Zheng, Huichi Zhou, Jindong Gu, Zhaorun Chen, Peng Xia, Tony Lee, Thomas P Zollo, Vikash Sehwag, Jixuan Leng, Jiuhai Chen, Yuxin Wen, Huan Zhang, Zhun Deng, Linjun Zhang, Pavel Izmailov, Pang Wei Koh, Yulia Tsvetkov, Andrew Gordon Wilson, Jiaheng Zhang, James Zou, Cihang Xie, Hao Wang, Philip Torr, Julian McAuley, David Alvarez-Melis, Florian Tramèr, Kaidi Xu, Suman Jana, Chris Callison-Burch, Rene Vidal, Filippos Kokkinos, Mohit Bansal, Beidi Chen, Huaxiu Yao
ICLR 2025 Scalable Extraction of Training Data from Aligned, Production Language Models Milad Nasr, Javier Rando, Nicholas Carlini, Jonathan Hayase, Matthew Jagielski, A. Feder Cooper, Daphne Ippolito, Christopher A. Choquette-Choo, Florian Tramèr, Katherine Lee
ICML 2025 The Jailbreak Tax: How Useful Are Your Jailbreak Outputs? Kristina Nikolić, Luze Sun, Jie Zhang, Florian Tramèr
ICLRW 2025 The Jailbreak Tax: How Useful Are Your Jailbreak Outputs? Kristina Nikolić, Luze Sun, Jie Zhang, Florian Tramèr
NeurIPS 2024 AgentDojo: A Dynamic Environment to Evaluate Prompt Injection Attacks and Defenses for LLM Agents Edoardo Debenedetti, Jie Zhang, Mislav Balunovic, Luca Beurer-Kellner, Marc Fischer, Florian Tramèr
NeurIPSW 2024 An Adversarial Perspective on Machine Unlearning for AI Safety Jakub Łucki, Boyi Wei, Yangsibo Huang, Peter Henderson, Florian Tramèr, Javier Rando
NeurIPS 2024 Dataset and Lessons Learned from the 2024 SaTML LLM Capture-the-Flag Competition Edoardo Debenedetti, Javier Rando, Daniel Paleka, Fineas Silaghi, Dragos Albastroiu, Niv Cohen, Yuval Lemberg, Reshmi Ghosh, Rui Wen, Ahmed Salem, Giovanni Cherubin, Santiago Zanella-Beguelin, Robin Schmid, Victor Klemm, Takahiro Miki, Chenhao Li, Stefan Kraft, Mario Fritz, Florian Tramèr, Sahar Abdelnabi, Lea Schönherr
ICML 2024 Extracting Training Data from Document-Based VQA Models Francesco Pinto, Nathalie Rauschmayr, Florian Tramèr, Philip Torr, Federico Tombari
ICMLW 2024 Extracting Training Data from Document-Based VQA Models Francesco Pinto, Nathalie Rauschmayr, Florian Tramèr, Philip Torr, Federico Tombari
TMLR 2024 Foundational Challenges in Assuring Alignment and Safety of Large Language Models Usman Anwar, Abulhair Saparov, Javier Rando, Daniel Paleka, Miles Turpin, Peter Hase, Ekdeep Singh Lubana, Erik Jenner, Stephen Casper, Oliver Sourbut, Benjamin L. Edelman, Zhaowei Zhang, Mario Günther, Anton Korinek, Jose Hernandez-Orallo, Lewis Hammond, Eric J Bigelow, Alexander Pan, Lauro Langosco, Tomasz Korbak, Heidi Chenyu Zhang, Ruiqi Zhong, Sean O hEigeartaigh, Gabriel Recchia, Giulio Corsi, Alan Chan, Markus Anderljung, Lilian Edwards, Aleksandar Petrov, Christian Schroeder de Witt, Sumeet Ramesh Motwani, Yoshua Bengio, Danqi Chen, Philip Torr, Samuel Albanie, Tegan Maharaj, Jakob Nicolaus Foerster, Florian Tramèr, He He, Atoosa Kasirzadeh, Yejin Choi, David Krueger
NeurIPS 2024 JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models Patrick Chao, Edoardo Debenedetti, Alexander Robey, Maksym Andriushchenko, Francesco Croce, Vikash Sehwag, Edgar Dobriban, Nicolas Flammarion, George J. Pappas, Florian Tramèr, Hamed Hassani, Eric Wong
ICMLW 2024 JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models Patrick Chao, Edoardo Debenedetti, Alexander Robey, Maksym Andriushchenko, Francesco Croce, Vikash Sehwag, Edgar Dobriban, Nicolas Flammarion, George J. Pappas, Florian Tramèr, Hamed Hassani, Eric Wong
ICML 2024 Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining Florian Tramèr, Gautam Kamath, Nicholas Carlini
ICML 2024 Privacy Backdoors: Stealing Data with Corrupted Pretrained Models Shanglun Feng, Florian Tramèr
NeurIPS 2024 Query-Based Adversarial Prompt Generation Jonathan Hayase, Ema Borevkovic, Nicholas Carlini, Florian Tramèr, Milad Nasr
ICML 2024 Stealing Part of a Production Language Model Nicholas Carlini, Daniel Paleka, Krishnamurthy Dj Dvijotham, Thomas Steinke, Jonathan Hayase, A. Feder Cooper, Katherine Lee, Matthew Jagielski, Milad Nasr, Arthur Conmy, Eric Wallace, David Rolnick, Florian Tramèr
ICLR 2024 Universal Jailbreak Backdoors from Poisoned Human Feedback Javier Rando, Florian Tramèr
ICLR 2023 (Certified!!) Adversarial Robustness for Free! Nicholas Carlini, Florian Tramèr, Krishnamurthy Dj Dvijotham, Leslie Rice, Mingjie Sun, J. Zico Kolter
NeurIPS 2023 Are Aligned Neural Networks Adversarially Aligned? Nicholas Carlini, Milad Nasr, Christopher A. Choquette-Choo, Matthew Jagielski, Irena Gao, Pang Wei W Koh, Daphne Ippolito, Florian Tramèr, Ludwig Schmidt
ICMLW 2023 Backdoor Attacks for In-Context Learning with Language Models Nikhil Kandpal, Matthew Jagielski, Florian Tramèr, Nicholas Carlini
NeurIPS 2023 Counterfactual Memorization in Neural Language Models Chiyuan Zhang, Daphne Ippolito, Katherine Lee, Matthew Jagielski, Florian Tramèr, Nicholas Carlini
ICMLW 2023 Evading Black-Box Classifiers Without Breaking Eggs Edoardo Debenedetti, Nicholas Carlini, Florian Tramèr
NeurIPSW 2023 Evaluating Superhuman Models with Consistency Checks Lukas Fluri, Daniel Paleka, Florian Tramèr
ICLR 2023 Measuring Forgetting of Memorized Training Examples Matthew Jagielski, Om Thakkar, Florian Tramèr, Daphne Ippolito, Katherine Lee, Nicholas Carlini, Eric Wallace, Shuang Song, Abhradeep Guha Thakurta, Nicolas Papernot, Chiyuan Zhang
ICML 2023 Preprocessors Matter! Realistic Decision-Based Attacks on Machine Learning Systems Chawin Sitawarin, Florian Tramèr, Nicholas Carlini
ICLR 2023 Quantifying Memorization Across Neural Language Models Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramèr, Chiyuan Zhang
NeurIPS 2023 Students Parrot Their Teachers: Membership Inference on Model Distillation Matthew Jagielski, Milad Nasr, Katherine Lee, Christopher A. Choquette-Choo, Nicholas Carlini, Florian Tramèr
ICLR 2022 Data Poisoning Won’t Save You from Facial Recognition Evani Radiya-Dixit, Sanghyun Hong, Nicholas Carlini, Florian Tramèr
ICML 2022 Detecting Adversarial Examples Is (Nearly) as Hard as Classifying Them Florian Tramèr
NeurIPS 2022 Increasing Confidence in Adversarial Robustness Evaluations Roland S. Zimmermann, Wieland Brendel, Florian Tramèr, Nicholas Carlini
ICLR 2022 Large Language Models Can Be Strong Differentially Private Learners Xuechen Li, Florian Tramèr, Percy Liang, Tatsunori Hashimoto
NeurIPSW 2022 Red-Teaming the Stable Diffusion Safety Filter Javier Rando, Daniel Paleka, David Lindner, Lennart Heim, Florian Tramèr
NeurIPS 2022 The Privacy Onion Effect: Memorization Is Relative Nicholas Carlini, Matthew Jagielski, Chiyuan Zhang, Nicolas Papernot, Andreas Terzis, Florian Tramèr
FnTML 2021 Advances and Open Problems in Federated Learning Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista A. Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Hubert Eichner, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaïd Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konecný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Hang Qi, Daniel Ramage, Ramesh Raskar, Mariana Raykova, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao
NeurIPS 2021 Antipodes of Label Differential Privacy: PATE and ALIBI Mani Malek Esmaeili, Ilya Mironov, Karthik Prasad, Igor Shilov, Florian Tramèr
ICMLW 2021 Data Poisoning Won’t Save You from Facial Recognition Evani Radiya-Dixit, Florian Tramèr
ICMLW 2021 Detecting Adversarial Examples Is (Nearly) as Hard as Classifying Them Florian Tramèr
ICLR 2021 Differentially Private Learning Needs Better Features (or Much More Data) Florian Tramèr, Dan Boneh
ICML 2021 Label-Only Membership Inference Attacks Christopher A. Choquette-Choo, Florian Tramèr, Nicholas Carlini, Nicolas Papernot
NeurIPSW 2021 Simple Baselines Are Strong Performers for Differentially Private Natural Language Processing Xuechen Li, Florian Tramèr, Percy Liang, Tatsunori Hashimoto
ICML 2020 Fundamental Tradeoffs Between Invariance and Sensitivity to Adversarial Perturbations Florian Tramèr, Jens Behrmann, Nicholas Carlini, Nicolas Papernot, Joern-Henrik Jacobsen
NeurIPS 2020 On Adaptive Attacks to Adversarial Example Defenses Florian Tramèr, Nicholas Carlini, Wieland Brendel, Aleksander Madry
NeurIPS 2019 Adversarial Training and Robustness for Multiple Perturbations Florian Tramèr, Dan Boneh
ICLR 2019 Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware Florian Tramèr, Dan Boneh
ICLR 2018 Ensemble Adversarial Training: Attacks and Defenses Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel