Raghunathan, Aditi

63 publications

ICLRW 2025 · Assessing Diversity Collapse in Reasoning · Xingyu Dang, Christina Baek, J. Zico Kolter, Aditi Raghunathan
ICLRW 2025 · Context-Parametric Inversion: Why Instruction Finetuning Can Worsen Context Reliance · Sachin Goyal, Christina Baek, J. Zico Kolter, Aditi Raghunathan
ICLR 2025 · Context-Parametric Inversion: Why Instruction Finetuning May Not Actually Improve Context Reliance · Sachin Goyal, Christina Baek, J. Zico Kolter, Aditi Raghunathan
ICLRW 2025 · Disentangling Sequence Memorization and General Capability in Large Language Models · Gaurav Rohit Ghosal, Pratyush Maini, Aditi Raghunathan
ICLR 2025 · Dissecting Adversarial Robustness of Multimodal LM Agents · Chen Henry Wu, Rishi Rajesh Shah, Jing Yu Koh, Russ Salakhutdinov, Daniel Fried, Aditi Raghunathan
ICLRW 2025 · Exact Unlearning of Finetuning Data via Model Merging at Scale · Kevin Kuo, Amrith Setlur, Kartik Srinivas, Aditi Raghunathan, Virginia Smith
ICML 2025 · Memorization Sinks: Isolating Memorization During LLM Training · Gaurav Rohit Ghosal, Pratyush Maini, Aditi Raghunathan
ICLRW 2025 · Multi-Token Prediction Boosts Creativity in Algorithmic Tasks · Vaishnavh Nagarajan, Chen Henry Wu, Charles Ding, Aditi Raghunathan
ICML 2025 · Overtrained Language Models Are Harder to Fine-Tune · Jacob Mitchell Springer, Sachin Goyal, Kaiyue Wen, Tanishq Kumar, Xiang Yue, Sadhika Malladi, Graham Neubig, Aditi Raghunathan
ICLRW 2025 · Overtrained Language Models Are Harder to Fine-Tune · Jacob Mitchell Springer, Sachin Goyal, Kaiyue Wen, Tanishq Kumar, Xiang Yue, Sadhika Malladi, Graham Neubig, Aditi Raghunathan
NeurIPS 2025 · Reasoning as an Adaptive Defense for Safety · Taeyoun Kim, Fahim Tajwar, Aditi Raghunathan, Aviral Kumar
ICLR 2025 · Repetition Improves Language Model Embeddings · Jacob Mitchell Springer, Suhas Kotha, Daniel Fried, Graham Neubig, Aditi Raghunathan
ICML 2025 · Roll the Dice & Look Before You Leap: Going Beyond the Creative Limits of Next-Token Prediction · Vaishnavh Nagarajan, Chen Henry Wu, Charles Ding, Aditi Raghunathan
ICLR 2025 · Scaling Laws for Precision · Tanishq Kumar, Zachary Ankner, Benjamin Frederick Spector, Blake Bordelon, Niklas Muennighoff, Mansheej Paul, Cengiz Pehlevan, Christopher Ré, Aditi Raghunathan
ICLRW 2025 · Self-Correction for OOD Generalization · Vanya Bannihatti Kumar, Abhinav Sukumar Rao, Aditi Raghunathan
AISTATS 2025 · Theory of Agreement-on-the-Line in Linear Models and Gaussian Data · Christina Baek, Aditi Raghunathan, J. Zico Kolter
ICLRW 2025 · Why Foundation Models Struggle with Cross-Modal Context · Chen Henry Wu, Neil Kale, Aditi Raghunathan
NeurIPSW 2024 · Dissecting Adversarial Robustness of Multimodal LM Agents · Chen Henry Wu, Rishi Rajesh Shah, Jing Yu Koh, Russ Salakhutdinov, Daniel Fried, Aditi Raghunathan
TMLR 2024 · Multitask Learning Can Improve Worst-Group Outcomes · Atharva Kulkarni, Lucio M. Dery, Amrith Setlur, Aditi Raghunathan, Ameet Talwalkar, Graham Neubig
NeurIPS 2024 · Predicting the Performance of Foundation Models via Agreement-on-the-Line · Rahul Saxena, Taeyoun Kim, Aman Mehra, Christina Baek, J. Zico Kolter, Aditi Raghunathan
CVPR 2024 · Scaling Laws for Data Filtering -- Data Curation Cannot Be Compute Agnostic · Sachin Goyal, Pratyush Maini, Zachary C. Lipton, Aditi Raghunathan, J. Zico Kolter
ICLR 2024 · Sharpness-Aware Minimization Enhances Feature Quality via Balanced Learning · Jacob Mitchell Springer, Vaishnavh Nagarajan, Aditi Raghunathan
ICLR 2024 · T-MARS: Improving Visual Representations by Circumventing Text Feature Learning · Pratyush Maini, Sachin Goyal, Zachary C. Lipton, J. Zico Kolter, Aditi Raghunathan
NeurIPS 2024 · Test-Time Adaptation Induces Stronger Accuracy and Agreement-on-the-Line · Eungyeup Kim, Mingjie Sun, Christina Baek, Aditi Raghunathan, J. Zico Kolter
NeurIPSW 2024 · Testing the Limits of Jailbreaking Defenses with the Purple Problem · Taeyoun Kim, Suhas Kotha, Aditi Raghunathan
ICLRW 2024 · The Science of Data Filtering: Data Curation Cannot Be Compute Agnostic · Sachin Goyal, Pratyush Maini, Zachary C. Lipton, Aditi Raghunathan, J. Zico Kolter
ICLR 2024 · Understanding Catastrophic Forgetting in Language Models via Implicit Inference · Suhas Kotha, Jacob Mitchell Springer, Aditi Raghunathan
ICML 2024 · Understanding Finetuning for Factual Knowledge Extraction · Gaurav Rohit Ghosal, Tatsunori Hashimoto, Aditi Raghunathan
ICLR 2024 · Why Is SAM Robust to Label Noise? · Christina Baek, J. Zico Kolter, Aditi Raghunathan
NeurIPSW 2023 · AutoFT: Robust Fine-Tuning by Optimizing Hyperparameters on OOD Data · Caroline Choi, Yoonho Lee, Annie S. Chen, Allan Zhou, Aditi Raghunathan, Chelsea Finn
ICML 2023 · Automatically Auditing Large Language Models via Discrete Optimization · Erik Jones, Anca Dragan, Aditi Raghunathan, Jacob Steinhardt
ICLR 2023 · Bitrate-Constrained DRO: Beyond Worst Case Robustness to Unknown Group Shifts · Amrith Setlur, Don Dennis, Benjamin Eysenbach, Aditi Raghunathan, Chelsea Finn, Virginia Smith, Sergey Levine
NeurIPS 2023 · Complementary Benefits of Contrastive Learning and Self-Training Under Distribution Shift · Saurabh Garg, Amrith Setlur, Zachary C. Lipton, Sivaraman Balakrishnan, Virginia Smith, Aditi Raghunathan
ICML 2023 · Contextual Reliability: When Different Features Matter in Different Contexts · Gaurav Rohit Ghosal, Amrith Setlur, Daniel S. Brown, Anca Dragan, Aditi Raghunathan
CVPR 2023 · Finetune like You Pretrain: Improved Finetuning of Zero-Shot Vision Models · Sachin Goyal, Ananya Kumar, Sankalp Garg, J. Zico Kolter, Aditi Raghunathan
NeurIPSW 2023 · Predicting the Performance of Foundation Models via Agreement-on-the-Line · Rahul Saxena, Aman Mehra, Taeyoun Kim, Christina Baek, J. Zico Kolter, Aditi Raghunathan
NeurIPSW 2023 · Reliable Test-Time Adaptation via Agreement-on-the-Line · Eungyeup Kim, Mingjie Sun, Aditi Raghunathan, J. Zico Kolter
NeurIPSW 2023 · Understanding Catastrophic Forgetting in Language Models via Implicit Inference · Suhas Kotha, Jacob Mitchell Springer, Aditi Raghunathan
ICLR 2023 · Using Language to Extend to Unseen Domains · Lisa Dunlap, Clara Mohri, Devin Guillory, Han Zhang, Trevor Darrell, Joseph E. Gonzalez, Aditi Raghunathan, Anna Rohrbach
NeurIPS 2022 · Agreement-on-the-Line: Predicting the Performance of Neural Networks Under Distribution Shift · Christina Baek, Yiding Jiang, Aditi Raghunathan, J. Zico Kolter
ICLR 2022 · An Explanation of In-Context Learning as Implicit Bayesian Inference · Sang Michael Xie, Aditi Raghunathan, Percy Liang, Tengyu Ma
NeurIPSW 2022 · Bitrate-Constrained DRO: Beyond Worst Case Robustness to Unknown Group Shifts · Amrith Setlur, Don Dennis, Benjamin Eysenbach, Aditi Raghunathan, Chelsea Finn, Virginia Smith, Sergey Levine
UAI 2022 · Calibrated Ensembles Can Mitigate Accuracy Tradeoffs Under Distribution Shift · Ananya Kumar, Tengyu Ma, Percy Liang, Aditi Raghunathan
ICLR 2022 · Fine-Tuning Can Distort Pretrained Features and Underperform Out-of-Distribution · Ananya Kumar, Aditi Raghunathan, Robbie Matthew Jones, Tengyu Ma, Percy Liang
CoRL 2022 · Learning Representations That Enable Generalization in Assistive Tasks · Jerry Zhi-Yang He, Zackory Erickson, Daniel S. Brown, Aditi Raghunathan, Anca Dragan
NeurIPS 2022 · Test Time Adaptation via Conjugate Pseudo-Labels · Sachin Goyal, Mingjie Sun, Aditi Raghunathan, J. Zico Kolter
ICML 2021 · Accuracy on the Line: On the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization · John P. Miller, Rohan Taori, Aditi Raghunathan, Shiori Sagawa, Pang Wei Koh, Vaishaal Shankar, Percy Liang, Yair Carmon, Ludwig Schmidt
NeurIPSW 2021 · Calibrated Ensembles: A Simple Way to Mitigate ID-OOD Accuracy Tradeoffs · Ananya Kumar, Aditi Raghunathan, Tengyu Ma, Percy Liang
ICML 2021 · Decoupling Exploration and Exploitation for Meta-Reinforcement Learning Without Sacrifices · Evan Zheran Liu, Aditi Raghunathan, Percy Liang, Chelsea Finn
ICML 2021 · Just Train Twice: Improving Group Robustness Without Training Group Information · Evan Zheran Liu, Behzad Haghgoo, Annie S. Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, Chelsea Finn
ICML 2020 · An Investigation of Why Overparameterization Exacerbates Spurious Correlations · Shiori Sagawa, Aditi Raghunathan, Pang Wei Koh, Percy Liang
ICML 2020 · DROCC: Deep Robust One-Class Classification · Sachin Goyal, Aditi Raghunathan, Moksh Jain, Harsha Vardhan Simhadri, Prateek Jain
NeurIPS 2020 · Enabling Certification of Verification-Agnostic Networks via Memory-Efficient Semidefinite Programming · Sumanth Dathathri, Krishnamurthy Dvijotham, Alexey Kurakin, Aditi Raghunathan, Jonathan Uesato, Rudy R. Bunel, Shreya Shankar, Jacob Steinhardt, Ian Goodfellow, Percy Liang, Pushmeet Kohli
ICMLW 2020 · Explore Then Execute: Adapting Without Rewards via Factorized Meta-Reinforcement Learning · Evan Zheran Liu, Aditi Raghunathan, Percy Liang, Chelsea Finn
NeurIPS 2020 · The Pitfalls of Simplicity Bias in Neural Networks · Harshay Shah, Kaustav Tamuly, Aditi Raghunathan, Prateek Jain, Praneeth Netrapalli
ICML 2020 · Understanding and Mitigating the Tradeoff Between Robustness and Accuracy · Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John C. Duchi, Percy Liang
ICMLW 2019 · Adversarial Training Can Hurt Generalization · Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John C. Duchi, Percy Liang
NeurIPS 2019 · Unlabeled Data Improves Adversarial Robustness · Yair Carmon, Aditi Raghunathan, Ludwig Schmidt, John C. Duchi, Percy Liang
ICLR 2018 · Certified Defenses Against Adversarial Examples · Aditi Raghunathan, Jacob Steinhardt, Percy Liang
NeurIPS 2018 · Semidefinite Relaxations for Certifying Robustness to Adversarial Examples · Aditi Raghunathan, Jacob Steinhardt, Percy Liang
ICML 2017 · Estimating the Unseen from Multiple Populations · Aditi Raghunathan, Gregory Valiant, James Zou
NeurIPS 2017 · Learning Mixture of Gaussians with Streaming Data · Aditi Raghunathan, Prateek Jain, Ravishankar Krishnaswamy
ICML 2016 · Estimation from Indirect Supervision with Linear Moments · Aditi Raghunathan, Roy Frostig, John C. Duchi, Percy Liang