Jiang, Liwei

21 publications

CPAL 2025. A Validation Approach to Over-Parameterized Matrix and Image Recovery. Lijun Ding, Zhen Qin, Liwei Jiang, Jinxin Zhou, Zhihui Zhu
NeurIPS 2025. AI Debate Aids Assessment of Controversial Claims. Salman Rahman, Sheriff Issaka, Ashima Suvarna, Genglin Liu, James Shiffer, Jaeyoung Lee, Md Rizwan Parvez, Hamid Palangi, Shi Feng, Nanyun Peng, Yejin Choi, Julian Michael, Liwei Jiang, Saadia Gabriel
ICLR 2025. AI as Humanity’s Salieri: Quantifying Linguistic Creativity of Language Models via Systematic Attribution of Machine Text Against Web Text. Ximing Lu, Melanie Sclar, Skyler Hallinan, Niloofar Mireshghallah, Jiacheng Liu, Seungju Han, Allyson Ettinger, Liwei Jiang, Khyathi Chandu, Nouha Dziri, Yejin Choi
NeurIPS 2025. Artificial Hivemind: The Open-Ended Homogeneity of Language Models (and Beyond). Liwei Jiang, Yuanjun Chai, Margaret Li, Mickel Liu, Raymond Fok, Nouha Dziri, Yulia Tsvetkov, Maarten Sap, Yejin Choi
ICLR 2025. DailyDilemmas: Revealing Value Preferences of LLMs with Quandaries of Daily Life. Yu Ying Chiu, Liwei Jiang, Yejin Choi
COLT 2025. Online Covariance Estimation in Nonsmooth Stochastic Approximation. Liwei Jiang, Abhishek Roy, Krishnakumar Balasubramanian, Damek Davis, Dmitriy Drusvyatskiy, Sen Na
ICML 2025. Position: Political Neutrality in AI Is Impossible — But Here Is How to Approximate It. Jillian Fisher, Ruth Elisabeth Appel, Chan Young Park, Yujin Potter, Liwei Jiang, Taylor Sorensen, Shangbin Feng, Yulia Tsvetkov, Margaret Roberts, Jennifer Pan, Dawn Song, Yejin Choi
ICML 2025. SafetyAnalyst: Interpretable, Transparent, and Steerable Safety Moderation for AI Behavior. Jing-Jing Li, Valentina Pyatkin, Max Kleiman-Weiner, Liwei Jiang, Nouha Dziri, Anne Collins, Jana Schaich Borg, Maarten Sap, Yejin Choi, Sydney Levine
AAAI 2025. To Err Is AI: A Case Study Informing LLM Flaw Reporting Practices. Sean McGregor, Allyson Ettinger, Nick Judd, Paul Albee, Liwei Jiang, Kavel Rao, William H. Smith, Shayne Longpre, Avijit Ghosh, Christopher Fiorelli, Michelle Hoang, Sven Cattell, Nouha Dziri
NeurIPSW 2024. Can Language Models Reason About Individualistic Human Values and Preferences? Liwei Jiang, Sydney Levine, Yejin Choi
ICLR 2024. Phenomenal yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement. Linlu Qiu, Liwei Jiang, Ximing Lu, Melanie Sclar, Valentina Pyatkin, Chandra Bhagavatula, Bailin Wang, Yoon Kim, Yejin Choi, Nouha Dziri, Xiang Ren
ICML 2024. Position: A Roadmap to Pluralistic Alignment. Taylor Sorensen, Jared Moore, Jillian Fisher, Mitchell L Gordon, Niloofar Mireshghallah, Christopher Michael Rytting, Andre Ye, Liwei Jiang, Ximing Lu, Nouha Dziri, Tim Althoff, Yejin Choi
NeurIPSW 2024. SafetyAnalyst: Interpretable, Transparent, and Steerable LLM Safety Moderation. Jing-Jing Li, Valentina Pyatkin, Max Kleiman-Weiner, Liwei Jiang, Nouha Dziri, Anne Collins, Jana Schaich Borg, Maarten Sap, Yejin Choi, Sydney Levine
ICLR 2024. The Generative AI Paradox: “What It Can Create, It May Not Understand”. Peter West, Ximing Lu, Nouha Dziri, Faeze Brahman, Linjie Li, Jena D. Hwang, Liwei Jiang, Jillian Fisher, Abhilasha Ravichander, Khyathi Chandu, Benjamin Newman, Pang Wei Koh, Allyson Ettinger, Yejin Choi
AAAI 2024. Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties. Taylor Sorensen, Liwei Jiang, Jena D. Hwang, Sydney Levine, Valentina Pyatkin, Peter West, Nouha Dziri, Ximing Lu, Kavel Rao, Chandra Bhagavatula, Maarten Sap, John Tasioulas, Yejin Choi
NeurIPS 2024. WildGuard: Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs. Seungju Han, Kavel Rao, Allyson Ettinger, Liwei Jiang, Bill Yuchen Lin, Nathan Lambert, Yejin Choi, Nouha Dziri
NeurIPS 2024. WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models. Liwei Jiang, Kavel Rao, Seungju Han, Allyson Ettinger, Faeze Brahman, Sachin Kumar, Niloofar Mireshghallah, Ximing Lu, Maarten Sap, Yejin Choi, Nouha Dziri
ICMLW 2024. WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models. Liwei Jiang, Kavel Rao, Seungju Han, Allyson Ettinger, Faeze Brahman, Sachin Kumar, Niloofar Mireshghallah, Ximing Lu, Maarten Sap, Nouha Dziri, Yejin Choi
NeurIPS 2023. Faith and Fate: Limits of Transformers on Compositionality. Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Li, Liwei Jiang, Bill Yuchen Lin, Sean Welleck, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena Hwang, Soumya Sanyal, Xiang Ren, Allyson Ettinger, Zaid Harchaoui, Yejin Choi
NeurIPS 2022. Quark: Controllable Text Generation with Reinforced Unlearning. Ximing Lu, Sean Welleck, Jack Hessel, Liwei Jiang, Lianhui Qin, Peter West, Prithviraj Ammanabrolu, Yejin Choi
NeurIPS 2021. Rank Overspecified Robust Matrix Recovery: Subgradient Method and Exact Recovery. Lijun Ding, Liwei Jiang, Yudong Chen, Qing Qu, Zhihui Zhu