Steinhardt, Jacob

78 publications

ICML 2025 Adversaries Can Misuse Combinations of Safe Models Erik Jones, Anca Dragan, Jacob Steinhardt
ICLRW 2025 Diagnostic Uncertainty: Teaching Language Models to Describe Open-Ended Uncertainty Brian Sui, Jessy Lin, Michelle Li, Anca Dragan, Dan Klein, Jacob Steinhardt
ICML 2025 Eliciting Language Model Behaviors with Investigator Agents Xiang Lisa Li, Neil Chowdhury, Daniel D. Johnson, Tatsunori Hashimoto, Percy Liang, Sarah Schwettmann, Jacob Steinhardt
NeurIPS 2025 Establishing Best Practices in Building Rigorous Agentic Benchmarks Yuxuan Zhu, Tengjun Jin, Yada Pruksachatkun, Andy K Zhang, Shu Liu, Sasha Cui, Sayash Kapoor, Shayne Longpre, Kevin Meng, Rebecca Weiss, Fazl Barez, Rahul Gupta, Jwala Dhamala, Jacob Merizian, Mario Giulianelli, Harry Coppock, Cozmin Ududec, Antony Kellermann, Jasjeet S Sekhon, Jacob Steinhardt, Sarah Schwettmann, Arvind Narayanan, Matei Zaharia, Ion Stoica, Percy Liang, Daniel Kang
ICML 2025 Extractive Structures Learned in Pretraining Enable Generalization on Finetuned Facts Jiahai Feng, Stuart Russell, Jacob Steinhardt
ICLR 2025 Interpreting the Second-Order Effects of Neurons in CLIP Yossi Gandelsman, Alexei A Efros, Jacob Steinhardt
ICLR 2025 Iterative Label Refinement Matters More than Preference Optimization Under Weak Supervision Yaowen Ye, Cassidy Laidlaw, Jacob Steinhardt
NeurIPS 2025 LLM Layers Immediately Correct Each Other Arjun Patrawala, Jiahai Feng, Erik Jones, Jacob Steinhardt
ICLR 2025 Language Models Learn to Mislead Humans via RLHF Jiaxin Wen, Ruiqi Zhong, Akbir Khan, Ethan Perez, Jacob Steinhardt, Minlie Huang, Samuel R. Bowman, He He, Shi Feng
ICLR 2025 Monitoring Latent World States in Language Models with Propositional Probes Jiahai Feng, Stuart Russell, Jacob Steinhardt
ICLR 2025 Uncovering Gaps in How Humans and LLMs Interpret Subjective Language Erik Jones, Arjun Patrawala, Jacob Steinhardt
ICLR 2025 VibeCheck: Discover and Quantify Qualitative Differences in Large Language Models Lisa Dunlap, Krishna Mandal, Trevor Darrell, Jacob Steinhardt, Joseph E. Gonzalez
ICML 2025 What Do Learning Dynamics Reveal About Generalization in LLM Mathematical Reasoning? Katie Kang, Amrith Setlur, Dibya Ghosh, Jacob Steinhardt, Claire Tomlin, Sergey Levine, Aviral Kumar
ICML 2025 Which Attention Heads Matter for In-Context Learning? Kayo Yin, Jacob Steinhardt
ICMLW 2024 AdaptiveBackdoor: Backdoored Language Model Agents That Detect Human Overseers Heng Wang, Ruiqi Zhong, Jiaxin Wen, Jacob Steinhardt
NeurIPS 2024 Approaching Human-Level Forecasting with Language Models Danny Halawi, Fred Zhang, Chen Yueh-Han, Jacob Steinhardt
ICML 2024 Covert Malicious Finetuning: Challenges in Safeguarding LLM Adaptation Danny Halawi, Alexander Wei, Eric Wallace, Tony Tong Wang, Nika Haghtalab, Jacob Steinhardt
CVPR 2024 Describing Differences in Image Sets with Natural Language Lisa Dunlap, Yuhui Zhang, Xiaohan Wang, Ruiqi Zhong, Trevor Darrell, Jacob Steinhardt, Joseph E. Gonzalez, Serena Yeung-Levy
ICML 2024 Do Models Explain Themselves? Counterfactual Simulatability of Natural Language Explanations Yanda Chen, Ruiqi Zhong, Narutatsu Ri, Chen Zhao, He He, Jacob Steinhardt, Zhou Yu, Kathleen Mckeown
NeurIPS 2024 Explaining Datasets in Words: Statistical Models with Natural Language Parameters Ruiqi Zhong, Heng Wang, Dan Klein, Jacob Steinhardt
ICML 2024 Feedback Loops with Language Models Drive In-Context Reward Hacking Alexander Pan, Erik Jones, Meena Jagadeesan, Jacob Steinhardt
ICLR 2024 How Do Language Models Bind Entities in Context? Jiahai Feng, Jacob Steinhardt
ICLR 2024 Interpreting CLIP's Image Representation via Text-Based Decomposition Yossi Gandelsman, Alexei A Efros, Jacob Steinhardt
ICLR 2024 Overthinking the Truth: Understanding How Language Models Process False Demonstrations Danny Halawi, Jean-Stanislas Denain, Jacob Steinhardt
ICLRW 2024 Protein Language Models Are Biased by Unequal Sequence Sampling Across the Tree of Life Frances Ding, Jacob Steinhardt
ICML 2023 Are Neurons Actually Collapsed? On the Fine-Grained Structure in Neural Representations Yongyi Yang, Jacob Steinhardt, Wei Hu
ICML 2023 Automatically Auditing Large Language Models via Discrete Optimization Erik Jones, Anca Dragan, Aditi Raghunathan, Jacob Steinhardt
ICLR 2023 Discovering Latent Knowledge in Language Models Without Supervision Collin Burns, Haotian Ye, Dan Klein, Jacob Steinhardt
NeurIPS 2023 Goal Driven Discovery of Distributional Differences via Language Descriptions Ruiqi Zhong, Peter Zhang, Steve Li, Jinwoo Ahn, Dan Klein, Jacob Steinhardt
NeurIPSW 2023 How Do Language Models Bind Entities in Context? Jiahai Feng, Jacob Steinhardt
NeurIPS 2023 Improved Bayes Risk Can Yield Reduced Social Welfare Under Competition Meena Jagadeesan, Michael I. Jordan, Jacob Steinhardt, Nika Haghtalab
ICLR 2023 Interpretability in the Wild: A Circuit for Indirect Object Identification in GPT-2 Small Kevin Ro Wang, Alexandre Variengien, Arthur Conmy, Buck Shlegeris, Jacob Steinhardt
NeurIPS 2023 Jailbroken: How Does LLM Safety Training Fail? Alexander Wei, Nika Haghtalab, Jacob Steinhardt
NeurIPS 2023 Mass-Producing Failures of Multimodal Systems with Language Models Shengbang Tong, Erik Jones, Jacob Steinhardt
ICLR 2023 Progress Measures for Grokking via Mechanistic Interpretability Neel Nanda, Lawrence Chan, Tom Lieberum, Jess Smith, Jacob Steinhardt
AISTATS 2023 Reward Learning as Doubly Nonparametric Bandits: Optimal Design and Scaling Laws Kush Bhatia, Wenshuo Guo, Jacob Steinhardt
NeurIPS 2023 Supply-Side Equilibria in Recommender Systems Meena Jagadeesan, Nikhil Garg, Jacob Steinhardt
CVPRW 2022 A3D: Studying Pretrained Representations with Programmable Datasets Ye Wang, Norman Mu, Daniele Grandi, Nicolas Savva, Jacob Steinhardt
NeurIPSW 2022 Are Neurons Actually Collapsed? On the Fine-Grained Structure in Neural Representations Yongyi Yang, Jacob Steinhardt, Wei Hu
NeurIPS 2022 Capturing Failures of Large Language Models via Human Cognitive Biases Erik Jones, Jacob Steinhardt
ICML 2022 Describing Differences Between Text Distributions with Natural Language Ruiqi Zhong, Charlie Snell, Dan Klein, Jacob Steinhardt
NeurIPS 2022 Forecasting Future World Events with Neural Networks Andy Zou, Tristan Xiao, Ryan Jia, Joe Kwon, Mantas Mazeika, Richard Li, Dawn Song, Jacob Steinhardt, Owain Evans, Dan Hendrycks
NeurIPS 2022 How Would the Viewer Feel? Estimating Wellbeing from Video Scenarios Mantas Mazeika, Eric Tang, Andy Zou, Steven Basart, Jun Shern Chan, Dawn Song, David A. Forsyth, Jacob Steinhardt, Dan Hendrycks
NeurIPSW 2022 Interpretability in the Wild: A Circuit for Indirect Object Identification in GPT-2 Small Kevin Ro Wang, Alexandre Variengien, Arthur Conmy, Buck Shlegeris, Jacob Steinhardt
ICML 2022 More than a Toy: Random Matrix Models Predict How Real-World Neural Representations Generalize Alexander Wei, Wei Hu, Jacob Steinhardt
CVPR 2022 PixMix: Dreamlike Pictures Comprehensively Improve Safety Measures Dan Hendrycks, Andy Zou, Mantas Mazeika, Leonard Tang, Bo Li, Dawn Song, Jacob Steinhardt
ICML 2022 Predicting Out-of-Distribution Error with the Projection Norm Yaodong Yu, Zitong Yang, Alexander Wei, Yi Ma, Jacob Steinhardt
ICML 2022 Scaling Out-of-Distribution Detection for Real-World Settings Dan Hendrycks, Steven Basart, Mantas Mazeika, Andy Zou, Joseph Kwon, Mohammadreza Mostajabi, Jacob Steinhardt, Dawn Song
MLJ 2022 Stronger Data Poisoning Attacks Break Data Sanitization Defenses Pang Wei Koh, Jacob Steinhardt, Percy Liang
ICLR 2022 The Effects of Reward Misspecification: Mapping and Mitigating Misaligned Models Alexander Pan, Kush Bhatia, Jacob Steinhardt
ICLR 2021 Aligning AI with Shared Human Values Dan Hendrycks, Collin Burns, Steven Basart, Andrew Critch, Jerry Li, Dawn Song, Jacob Steinhardt
NeurIPS 2021 Grounding Representation Similarity Through Statistical Testing Frances Ding, Jean-Stanislas Denain, Jacob Steinhardt
NeurIPS 2021 Learning Equilibria in Matching Markets from Bandit Feedback Meena Jagadeesan, Alexander Wei, Yixin Wang, Michael I. Jordan, Jacob Steinhardt
CVPR 2021 Limitations of Post-Hoc Feature Alignment for Robustness Collin Burns, Jacob Steinhardt
ICLR 2021 Measuring Massive Multitask Language Understanding Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, Jacob Steinhardt
CVPR 2021 Natural Adversarial Examples Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, Dawn Song
NeurIPSW 2021 PixMix: Dreamlike Pictures Comprehensively Improve Safety Measures Dan Hendrycks, Andy Zou, Mantas Mazeika, Leonard Tang, Dawn Song, Jacob Steinhardt
NeurIPSW 2021 The Effect of Model Size on Worst-Group Generalization Alan Le Pham, Eunice Chan, Vikranth Srivatsa, Dhruba Ghosh, Yaoqing Yang, Yaodong Yu, Ruiqi Zhong, Joseph E. Gonzalez, Jacob Steinhardt
NeurIPSW 2021 The Effects of Reward Misspecification: Mapping and Mitigating Misaligned Models Alexander Pan, Kush Bhatia, Jacob Steinhardt
ICCV 2021 The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, Dawn Song, Jacob Steinhardt, Justin Gilmer
NeurIPS 2020 Enabling Certification of Verification-Agnostic Networks via Memory-Efficient Semidefinite Programming Sumanth Dathathri, Krishnamurthy Dvijotham, Alexey Kurakin, Aditi Raghunathan, Jonathan Uesato, Rudy R Bunel, Shreya Shankar, Jacob Steinhardt, Ian Goodfellow, Percy Liang, Pushmeet Kohli
ICML 2020 Identifying Statistical Bias in Dataset Replication Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Jacob Steinhardt, Aleksander Madry
ICML 2020 Rethinking Bias-Variance Trade-Off for Generalization of Neural Networks Zitong Yang, Yaodong Yu, Chong You, Jacob Steinhardt, Yi Ma
ICML 2019 Sever: A Robust Meta-Algorithm for Stochastic Optimization Ilias Diakonikolas, Gautam Kamath, Daniel Kane, Jerry Li, Jacob Steinhardt, Alistair Stewart
ICLR 2018 Certified Defenses Against Adversarial Examples Aditi Raghunathan, Jacob Steinhardt, Percy Liang
NeurIPS 2018 Semidefinite Relaxations for Certifying Robustness to Adversarial Examples Aditi Raghunathan, Jacob Steinhardt, Percy Liang
NeurIPS 2017 Certified Defenses for Data Poisoning Attacks Jacob Steinhardt, Pang Wei W Koh, Percy Liang
NeurIPS 2016 Avoiding Imposters and Delinquents: Adversarial Crowdsourcing and Peer Prediction Jacob Steinhardt, Gregory Valiant, Moses Charikar
COLT 2016 Memory, Communication, and Statistical Queries Jacob Steinhardt, Gregory Valiant, Stefan Wager
NeurIPS 2016 Unsupervised Risk Estimation Using Only Conditional Independence Structure Jacob Steinhardt, Percy Liang
ICML 2015 Learning Fast-Mixing Models for Structured Prediction Jacob Steinhardt, Percy Liang
AISTATS 2015 Learning Where to Sample in Structured Prediction Tianlin Shi, Jacob Steinhardt, Percy Liang
NeurIPS 2015 Learning with Relaxed Supervision Jacob Steinhardt, Percy Liang
COLT 2015 Minimax Rates for Memory-Bounded Sparse Linear Regression Jacob Steinhardt, John C. Duchi
ICML 2015 Reified Context Models Jacob Steinhardt, Percy Liang
ICML 2014 Adaptivity and Optimism: An Improved Exponentiated Gradient Algorithm Jacob Steinhardt, Percy Liang
ICML 2014 Filtering with Abstract Particles Jacob Steinhardt, Percy Liang
AISTATS 2012 Flexible Martingale Priors for Deep Hierarchies Jacob Steinhardt, Zoubin Ghahramani