Adlam, Ben

13 publications

TMLR 2024. Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models. Avi Singh, John D. Co-Reyes, Rishabh Agarwal, Ankesh Anand, Piyush Patil, Xavier Garcia, Peter J. Liu, James Harrison, Jaehoon Lee, Kelvin Xu, Aaron T. Parisi, Abhishek Kumar, Alexander A. Alemi, Alex Rizkowsky, Azade Nova, Ben Adlam, Bernd Bohnet, Gamaleldin Fathy Elsayed, Hanie Sedghi, Igor Mordatch, Isabelle Simpson, Izzeddin Gur, Jasper Snoek, Jeffrey Pennington, Jiri Hron, Kathleen Kenealy, Kevin Swersky, Kshiteej Mahajan, Laura A. Culp, Lechao Xiao, Maxwell Bileschi, Noah Constant, Roman Novak, Rosanne Liu, Tris Warkentin, Yamini Bansal, Ethan Dyer, Behnam Neyshabur, Jascha Sohl-Dickstein, Noah Fiedel.
ICLR 2024. Small-Scale Proxies for Large-Scale Transformer Training Instabilities. Mitchell Wortsman, Peter J. Liu, Lechao Xiao, Katie E. Everett, Alexander A. Alemi, Ben Adlam, John D. Co-Reyes, Izzeddin Gur, Abhishek Kumar, Roman Novak, Jeffrey Pennington, Jascha Sohl-Dickstein, Kelvin Xu, Jaehoon Lee, Justin Gilmer, Simon Kornblith.
AISTATS 2022. A Random Matrix Perspective on Mixtures of Nonlinearities in High Dimensions. Ben Adlam, Jake A. Levinson, Jeffrey Pennington.
TMLR 2022. Ensembles of Classifiers: A Bias-Variance Perspective. Neha Gupta, Jamie Smith, Ben Adlam, Zelda E. Mariet.
NeurIPS 2022. Implicit Regularization or Implicit Conditioning? Exact Risk Trajectories of SGD in High Dimensions. Courtney Paquette, Elliot Paquette, Ben Adlam, Jeffrey Pennington.
JMLR 2022. Underspecification Presents Challenges for Credibility in Modern Machine Learning. Alexander D'Amour, Katherine Heller, Dan Moldovan, Ben Adlam, Babak Alipanahi, Alex Beutel, Christina Chen, Jonathan Deaton, Jacob Eisenstein, Matthew D. Hoffman, Farhad Hormozdiari, Neil Houlsby, Shaobo Hou, Ghassen Jerfel, Alan Karthikesalingam, Mario Lucic, Yian Ma, Cory McLean, Diana Mincu, Akinori Mitani, Andrea Montanari, Zachary Nado, Vivek Natarajan, Christopher Nielson, Thomas F. Osborne, Rajiv Raman, Kim Ramasamy, Rory Sayres, Jessica Schrouff, Martin Seneviratne, Shannon Sequeira, Harini Suresh, Victor Veitch, Max Vladymyrov, Xuezhi Wang, Kellie Webster, Steve Yadlowsky, Taedong Yun, Xiaohua Zhai, D. Sculley.
ICLR 2021. Exploring the Uncertainty Properties of Neural Networks’ Implicit Priors in the Infinite-Width Limit. Ben Adlam, Jaehoon Lee, Lechao Xiao, Jeffrey Pennington, Jasper Snoek.
NeurIPS 2021. Overparameterization Improves Robustness to Covariate Shift in High Dimensions. Nilesh Tripuraneni, Ben Adlam, Jeffrey Pennington.
NeurIPS 2020. Finite Versus Infinite Neural Networks: An Empirical Study. Jaehoon Lee, Samuel Schoenholz, Jeffrey Pennington, Ben Adlam, Lechao Xiao, Roman Novak, Jascha Sohl-Dickstein.
ICML 2020. The Neural Tangent Kernel in High Dimensions: Triple Descent and a Multi-Scale Theory of Generalization. Ben Adlam, Jeffrey Pennington.
NeurIPS 2020. The Surprising Simplicity of the Early-Time Learning Dynamics of Neural Networks. Wei Hu, Lechao Xiao, Ben Adlam, Jeffrey Pennington.
NeurIPS 2020. Understanding Double Descent Requires a Fine-Grained Bias-Variance Decomposition. Ben Adlam, Jeffrey Pennington.
NeurIPS 2019. Learning GANs and Ensembles Using Discrepancy. Ben Adlam, Corinna Cortes, Mehryar Mohri, Ningshan Zhang.