Che, Zora

10 publications

ICLRW 2025 AegisLLM: Scaling Agentic Systems for Self-Reflective Defense in LLM Security. Zikui Cai, Shayan Shabihi, Bang An, Zora Che, Brian R. Bartoldson, Bhavya Kailkhura, Tom Goldstein, Furong Huang
AAAI 2025 Can Watermarking Large Language Models Prevent Copyrighted Text Generation and Hide Training Data? Michael-Andrei Panaitescu-Liess, Zora Che, Bang An, Yuancheng Xu, Pankayaraj Pathmanathan, Souradip Chakraborty, Sicheng Zhu, Tom Goldstein, Furong Huang
TMLR 2025 Model Tampering Attacks Enable More Rigorous Evaluations of LLM Capabilities. Zora Che, Stephen Casper, Robert Kirk, Anirudh Satheesh, Stewart Slocum, Lev E McKinney, Rohit Gandikota, Aidan Ewart, Domenic Rosati, Zichu Wu, Zikui Cai, Bilal Chughtai, Yarin Gal, Furong Huang, Dylan Hadfield-Menell
ICMLW 2024 Can Watermarking Large Language Models Prevent Copyrighted Text Generation and Hide Training Data? Michael-Andrei Panaitescu-Liess, Zora Che, Bang An, Yuancheng Xu, Pankayaraj Pathmanathan, Souradip Chakraborty, Sicheng Zhu, Tom Goldstein, Furong Huang
NeurIPSW 2024 Can Watermarking Large Language Models Prevent Copyrighted Text Generation and Hide Training Data? Michael-Andrei Panaitescu-Liess, Zora Che, Bang An, Yuancheng Xu, Pankayaraj Pathmanathan, Souradip Chakraborty, Sicheng Zhu, Tom Goldstein, Furong Huang
NeurIPSW 2024 EnsemW2S: Can an Ensemble of LLMs Be Leveraged to Obtain a Stronger LLM? Aakriti Agrawal, Mucong Ding, Zora Che, Chenghao Deng, Anirudh Satheesh, John Langford, Furong Huang
NeurIPSW 2024 Model Manipulation Attacks Enable More Rigorous Evaluations of LLM Capabilities. Zora Che, Stephen Casper, Anirudh Satheesh, Rohit Gandikota, Domenic Rosati, Stewart Slocum, Lev E McKinney, Zichu Wu, Zikui Cai, Bilal Chughtai, Daniel Filan, Furong Huang, Dylan Hadfield-Menell
NeurIPSW 2024 PoisonedParrot: Subtle Data Poisoning Attacks to Elicit Copyright-Infringing Content from Large Language Models. Michael-Andrei Panaitescu-Liess, Pankayaraj Pathmanathan, Yigitcan Kaya, Zora Che, Bang An, Sicheng Zhu, Aakriti Agrawal, Furong Huang
ICMLW 2024 SAIL: Self-Improving Efficient Online Alignment of Large Language Models. Mucong Ding, Souradip Chakraborty, Vibhu Agrawal, Zora Che, Alec Koppel, Mengdi Wang, Amrit Bedi, Furong Huang
NeurIPS 2022 Transferring Fairness Under Distribution Shifts via Fair Consistency Regularization. Bang An, Zora Che, Mucong Ding, Furong Huang