Shwartz-Ziv, Ravid

25 publications

AISTATS 2025. Fine-Tuning with Uncertainty-Aware Priors Makes Vision and Language Foundation Models More Reliable. Tim G. J. Rudner, Xiang Pan, Yucen Lily Li, Ravid Shwartz-Ziv, Andrew Gordon Wilson.
ICML 2025. Layer by Layer: Uncovering Hidden Representations in Language Models. Oscar Skean, Md Rifat Arefin, Dan Zhao, Niket Nikul Patel, Jalal Naghiyev, Yann LeCun, Ravid Shwartz-Ziv.
ICLR 2025. LiveBench: A Challenging, Contamination-Limited LLM Benchmark. Colin White, Samuel Dooley, Manley Roberts, Arka Pal, Benjamin Feuer, Siddhartha Jain, Ravid Shwartz-Ziv, Neel Jain, Khalid Saifullah, Sreemanti Dey, Shubh-Agrawal, Sandeep Singh Sandha, Siddartha Venkat Naidu, Chinmay Hegde, Yann LeCun, Tom Goldstein, Willie Neiswanger, Micah Goldblum.
CVPR 2025. Rate-In: Information-Driven Adaptive Dropout Rates for Improved Inference-Time Uncertainty Estimation. Tal Zeevi, Ravid Shwartz-Ziv, Yann LeCun, Lawrence H. Staib, John A. Onofrey.
ICLR 2025. Seq-VCR: Preventing Collapse in Intermediate Transformer Representations for Enhanced Reasoning. Md Rifat Arefin, Gopeshh Subbaraj, Nicolas Gontier, Yann LeCun, Irina Rish, Ravid Shwartz-Ziv, Christopher Pal.
ICLR 2025. Turning Up the Heat: Min-P Sampling for Creative and Coherent LLM Outputs. Nguyen Nhat Minh, Andrew Baker, Clement Neo, Allen G Roush, Andreas Kirsch, Ravid Shwartz-Ziv.
NeurIPSW 2024. Does Representation Matter? Exploring Intermediate Layers in Large Language Models. Oscar Skean, Md Rifat Arefin, Ravid Shwartz-Ziv.
ICMLW 2024. Fine-Tuning with Uncertainty-Aware Priors Makes Vision and Language Foundation Models More Reliable. Tim G. J. Rudner, Xiang Pan, Yucen Lily Li, Ravid Shwartz-Ziv, Andrew Gordon Wilson.
NeurIPSW 2024. Learning to Compress: Local Rank and Information Compression in Deep Neural Networks. Niket Nikul Patel, Ravid Shwartz-Ziv.
ICLR 2024. Sudden Drops in the Loss: Syntax Acquisition, Phase Transitions, and Simplicity Bias in MLMs. Angelica Chen, Ravid Shwartz-Ziv, Kyunghyun Cho, Matthew L Leavitt, Naomi Saphra.
ICML 2024. The Entropy Enigma: Success and Failure of Entropy Minimization. Ori Press, Ravid Shwartz-Ziv, Yann LeCun, Matthias Bethge.
ICLRW 2024. Towards an Improved Understanding and Utilization of Maximum Manifold Capacity Representations. Rylan Schaeffer, Berivan Isik, Dhruv Bhandarkar Pai, Andres Carranza, Victor Lecomte, Alyssa Unell, Mikail Khona, Thomas Edward Yerxa, Yann LeCun, SueYeon Chung, Andrey Gromov, Ravid Shwartz-Ziv, Sanmi Koyejo.
NeurIPS 2023. An Information Theory Perspective on Variance-Invariance-Covariance Regularization. Ravid Shwartz-Ziv, Randall Balestriero, Kenji Kawaguchi, Tim G. J. Rudner, Yann LeCun.
NeurIPSW 2023. An Information-Theoretic Understanding of Maximum Manifold Capacity Representations. Rylan Schaeffer, Berivan Isik, Victor Lecomte, Mikail Khona, Yann LeCun, Andrey Gromov, Ravid Shwartz-Ziv, Sanmi Koyejo.
NeurIPSW 2023. An Information-Theoretic Understanding of Maximum Manifold Capacity Representations. Victor Lecomte, Rylan Schaeffer, Berivan Isik, Mikail Khona, Yann LeCun, Sanmi Koyejo, Andrey Gromov, Ravid Shwartz-Ziv.
NeurIPSW 2023. An Information-Theoretic Understanding of Maximum Manifold Capacity Representations. Berivan Isik, Victor Lecomte, Rylan Schaeffer, Yann LeCun, Mikail Khona, Ravid Shwartz-Ziv, Sanmi Koyejo, Andrey Gromov.
ICLR 2023. How Much Data Are Augmentations Worth? An Investigation into Scaling Laws, Invariance, and Implicit Regularization. Jonas Geiping, Micah Goldblum, Gowthami Somepalli, Ravid Shwartz-Ziv, Tom Goldstein, Andrew Gordon Wilson.
NeurIPS 2023. Reverse Engineering Self-Supervised Learning. Ido Ben-Shaul, Ravid Shwartz-Ziv, Tomer Galanti, Shai Dekel, Yann LeCun.
NeurIPS 2023. Simplifying Neural Network Training Under Class Imbalance. Ravid Shwartz-Ziv, Micah Goldblum, Yucen Li, C. Bayan Bruss, Andrew G Wilson.
ICMLW 2022. How Much Data Is Augmentation Worth? Jonas Geiping, Gowthami Somepalli, Ravid Shwartz-Ziv, Andrew Gordon Wilson, Tom Goldstein, Micah Goldblum.
NeurIPSW 2022. On Representation Learning Under Class Imbalance. Ravid Shwartz-Ziv, Micah Goldblum, Yucen Lily Li, C. Bayan Bruss, Andrew Gordon Wilson.
ICMLW 2022. Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Priors. Ravid Shwartz-Ziv, Micah Goldblum, Hossein Souri, Sanyam Kapoor, Chen Zhu, Yann LeCun, Andrew Gordon Wilson.
NeurIPS 2022. Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Priors. Ravid Shwartz-Ziv, Micah Goldblum, Hossein Souri, Sanyam Kapoor, Chen Zhu, Yann LeCun, Andrew G Wilson.
ICMLW 2022. What Do We Maximize in Self-Supervised Learning? Ravid Shwartz-Ziv, Randall Balestriero, Yann LeCun.
ICMLW 2021. Tabular Data: Deep Learning Is Not All You Need. Ravid Shwartz-Ziv, Amitai Armon.