Pennington, Jeffrey

38 publications

ICML 2025 Scaling Collapse Reveals Universal Dynamics in Compute-Optimally Trained Neural Networks Shikai Qiu, Lechao Xiao, Andrew Gordon Wilson, Jeffrey Pennington, Atish Agarwala
NeurIPS 2024 4+3 Phases of Compute-Optimal Neural Scaling Laws Elliot Paquette, Courtney Paquette, Lechao Xiao, Jeffrey Pennington
TMLR 2024 Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models Avi Singh, John D Co-Reyes, Rishabh Agarwal, Ankesh Anand, Piyush Patil, Xavier Garcia, Peter J Liu, James Harrison, Jaehoon Lee, Kelvin Xu, Aaron T Parisi, Abhishek Kumar, Alexander A Alemi, Alex Rizkowsky, Azade Nova, Ben Adlam, Bernd Bohnet, Gamaleldin Fathy Elsayed, Hanie Sedghi, Igor Mordatch, Isabelle Simpson, Izzeddin Gur, Jasper Snoek, Jeffrey Pennington, Jiri Hron, Kathleen Kenealy, Kevin Swersky, Kshiteej Mahajan, Laura A Culp, Lechao Xiao, Maxwell Bileschi, Noah Constant, Roman Novak, Rosanne Liu, Tris Warkentin, Yamini Bansal, Ethan Dyer, Behnam Neyshabur, Jascha Sohl-Dickstein, Noah Fiedel
NeurIPSW 2024 Scaling Collapse Reveals Universal Dynamics in Compute-Optimally Trained Neural Networks Shikai Qiu, Atish Agarwala, Jeffrey Pennington, Lechao Xiao
ICML 2024 Scaling Exponents Across Parameterizations and Optimizers Katie E Everett, Lechao Xiao, Mitchell Wortsman, Alexander A Alemi, Roman Novak, Peter J Liu, Izzeddin Gur, Jascha Sohl-Dickstein, Leslie Pack Kaelbling, Jaehoon Lee, Jeffrey Pennington
ICLR 2024 Small-Scale Proxies for Large-Scale Transformer Training Instabilities Mitchell Wortsman, Peter J Liu, Lechao Xiao, Katie E Everett, Alexander A Alemi, Ben Adlam, John D Co-Reyes, Izzeddin Gur, Abhishek Kumar, Roman Novak, Jeffrey Pennington, Jascha Sohl-Dickstein, Kelvin Xu, Jaehoon Lee, Justin Gilmer, Simon Kornblith
TMLR 2024 Training LLMs over Neurally Compressed Text Brian Lester, Jaehoon Lee, Alexander A Alemi, Jeffrey Pennington, Adam Roberts, Jascha Sohl-Dickstein, Noah Constant
ICML 2023 Second-Order Regression Models Exhibit Progressive Sharpening to the Edge of Stability Atish Agarwala, Fabian Pedregosa, Jeffrey Pennington
TMLR 2023 Temperature Check: Theory and Practice for Training Models with Softmax-Cross-Entropy Losses Atish Agarwala, Samuel Stern Schoenholz, Jeffrey Pennington, Yann Dauphin
AISTATS 2022 A Random Matrix Perspective on Mixtures of Nonlinearities in High Dimensions Ben Adlam, Jake A. Levinson, Jeffrey Pennington
NeurIPSW 2022 A Second-Order Regression Model Shows Edge of Stability Behavior Fabian Pedregosa, Atish Agarwala, Jeffrey Pennington
ICLR 2022 Anisotropic Random Feature Regression in High Dimensions Gabriel Mel, Jeffrey Pennington
NeurIPS 2022 Implicit Regularization or Implicit Conditioning? Exact Risk Trajectories of SGD in High Dimensions Courtney Paquette, Elliot Paquette, Ben Adlam, Jeffrey Pennington
NeurIPS 2022 Precise Learning Curves and Higher-Order Scalings for Dot-Product Kernel Regression Lechao Xiao, Hong Hu, Theodor Misiakiewicz, Yue Lu, Jeffrey Pennington
ICML 2022 Synergy and Symmetry in Deep Learning: Interactions Between the Data, Model, and Inference Algorithm Lechao Xiao, Jeffrey Pennington
ICML 2022 Wide Bayesian Neural Networks Have a Simple Weight Posterior: Theory and Accelerated Sampling Jiri Hron, Roman Novak, Jeffrey Pennington, Jascha Sohl-Dickstein
ICLR 2021 Exploring the Uncertainty Properties of Neural Networks’ Implicit Priors in the Infinite-Width Limit Ben Adlam, Jaehoon Lee, Lechao Xiao, Jeffrey Pennington, Jasper Snoek
NeurIPS 2021 Overparameterization Improves Robustness to Covariate Shift in High Dimensions Nilesh Tripuraneni, Ben Adlam, Jeffrey Pennington
ICML 2020 Disentangling Trainability and Generalization in Deep Neural Networks Lechao Xiao, Jeffrey Pennington, Samuel Schoenholz
NeurIPS 2020 Finite Versus Infinite Neural Networks: An Empirical Study Jaehoon Lee, Samuel Schoenholz, Jeffrey Pennington, Ben Adlam, Lechao Xiao, Roman Novak, Jascha Sohl-Dickstein
ICLR 2020 Provable Benefit of Orthogonal Initialization in Optimizing Deep Linear Networks Wei Hu, Lechao Xiao, Jeffrey Pennington
ICML 2020 The Neural Tangent Kernel in High Dimensions: Triple Descent and a Multi-Scale Theory of Generalization Ben Adlam, Jeffrey Pennington
NeurIPS 2020 The Surprising Simplicity of the Early-Time Learning Dynamics of Neural Networks Wei Hu, Lechao Xiao, Ben Adlam, Jeffrey Pennington
NeurIPS 2020 Understanding Double Descent Requires a Fine-Grained Bias-Variance Decomposition Ben Adlam, Jeffrey Pennington
ICLR 2019 A Mean Field Theory of Batch Normalization Greg Yang, Jeffrey Pennington, Vinay Rao, Jascha Sohl-Dickstein, Samuel S. Schoenholz
ICLR 2019 Bayesian Deep Convolutional Networks with Many Channels Are Gaussian Processes Roman Novak, Lechao Xiao, Yasaman Bahri, Jaehoon Lee, Greg Yang, Jiri Hron, Daniel A. Abolafia, Jeffrey Pennington, Jascha Sohl-Dickstein
AISTATS 2019 KAMA-NNs: Low-Dimensional Rotation Based Neural Networks Krzysztof Choromanski, Aldo Pacchiano, Jeffrey Pennington, Yunhao Tang
NeurIPS 2019 Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent Jaehoon Lee, Lechao Xiao, Samuel Schoenholz, Yasaman Bahri, Roman Novak, Jascha Sohl-Dickstein, Jeffrey Pennington
ICLR 2018 Deep Neural Networks as Gaussian Processes Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel S. Schoenholz, Jeffrey Pennington, Jascha Sohl-Dickstein
ICML 2018 Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks Lechao Xiao, Yasaman Bahri, Jascha Sohl-Dickstein, Samuel Schoenholz, Jeffrey Pennington
ICML 2018 Dynamical Isometry and a Mean Field Theory of RNNs: Gating Enables Signal Propagation in Recurrent Neural Networks Minmin Chen, Jeffrey Pennington, Samuel Schoenholz
ICLR 2018 Sensitivity and Generalization in Neural Networks: An Empirical Study Roman Novak, Yasaman Bahri, Daniel A. Abolafia, Jeffrey Pennington, Jascha Sohl-Dickstein
AISTATS 2018 The Emergence of Spectral Universality in Deep Networks Jeffrey Pennington, Samuel S. Schoenholz, Surya Ganguli
NeurIPS 2018 The Spectrum of the Fisher Information Matrix of a Single-Hidden-Layer Neural Network Jeffrey Pennington, Pratik Worah
ICML 2017 Geometry of Neural Network Loss Surfaces via Random Matrix Theory Jeffrey Pennington, Yasaman Bahri
NeurIPS 2017 Nonlinear Random Matrix Theory for Deep Learning Jeffrey Pennington, Pratik Worah
NeurIPS 2017 Resurrecting the Sigmoid in Deep Learning Through Dynamical Isometry: Theory and Practice Jeffrey Pennington, Samuel Schoenholz, Surya Ganguli
NeurIPS 2015 Spherical Random Features for Polynomial Kernels Jeffrey Pennington, Felix Xinnan X Yu, Sanjiv Kumar