Dahl, George E.

9 publications

ICLR 2025. Accelerating Neural Network Training: An Analysis of the AlgoPerf Competition. Priya Kasimbeg, Frank Schneider, Runa Eschenhagen, Juhan Bae, Chandramouli Shama Sastry, Mark Saroufim, Boyuan Feng, Less Wright, Edward Z. Yang, Zachary Nado, Sourabh Medapati, Philipp Hennig, Michael Rabbat, George E. Dahl.

TMLR 2025. How Far Away Are Truly Hyperparameter-Free Learning Algorithms? Priya Kasimbeg, Vincent Roulet, Naman Agarwal, Sourabh Medapati, Fabian Pedregosa, Atish Agarwala, George E. Dahl.

JMLR 2024. Pre-Trained Gaussian Processes for Bayesian Optimization. Zi Wang, George E. Dahl, Kevin Swersky, Chansoo Lee, Zachary Nado, Justin Gilmer, Jasper Snoek, Zoubin Ghahramani.

NeurIPSW 2023. Adaptive Gradient Methods at the Edge of Stability. Jeremy Cohen, Behrooz Ghorbani, Shankar Krishnan, Naman Agarwal, Sourabh Medapati, Michal Badura, Daniel Suo, Zachary Nado, George E. Dahl, Justin Gilmer.

JMLR 2019. Measuring the Effects of Data Parallelism on Neural Network Training. Christopher J. Shallue, Jaehoon Lee, Joseph Antognini, Jascha Sohl-Dickstein, Roy Frostig, George E. Dahl.

ICLR 2018. Large Scale Distributed Neural Network Training Through Online Distillation. Rohan Anil, Gabriel Pereyra, Alexandre Passos, Robert Ormandi, George E. Dahl, Geoffrey E. Hinton.

ICML 2017. Neural Message Passing for Quantum Chemistry. Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, George E. Dahl.

ICML 2012. Training Restricted Boltzmann Machines on Word Observations. George E. Dahl, Ryan Prescott Adams, Hugo Larochelle.

UAI 2010. Incorporating Side Information in Probabilistic Matrix Factorization with Gaussian Processes. Ryan Prescott Adams, George E. Dahl, Iain Murray.