Hestness, Joel

9 publications

NeurIPS 2025 · Don't Be Lazy: CompleteP Enables Compute-Efficient Deep Transformers · Nolan Simran Dey, Bin Claire Zhang, Lorenzo Noci, Mufan Li, Blake Bordelon, Shane Bergsma, Cengiz Pehlevan, Boris Hanin, Joel Hestness
NeurIPS 2025 · Power Lines: Scaling Laws for Weight Decay and Batch Size in LLM Pre-Training · Shane Bergsma, Nolan Simran Dey, Gurpreet Gosal, Gavia Gray, Daria Soboleva, Joel Hestness
ICLR 2025 · Straight to Zero: Why Linearly Decaying the Learning Rate to Zero Works Best for LLMs · Shane Bergsma, Nolan Simran Dey, Gurpreet Gosal, Gavia Gray, Daria Soboleva, Joel Hestness
ICMLW 2024 · Bilingual Adaptation of Monolingual Foundation Models · Gurpreet Gosal, Yishi Xu, Gokulakrishnan Ramakrishnan, Rituraj Joshi, Avraham Sheinin, Zhiming Chen, Biswajit Mishra, Sunil Kumar Sahu, Neha Sengupta, Natalia Vassilieva, Joel Hestness, Samujjwal Ghosh, Bokang Jia, Onkar Arun Pandit, Satheesh Katipomu, Samta Kamboj, Rahul Pal, Parvez Mullah, Soundar Balaji Doraiswamy, Karim Chami, Preslav Nakov
CPAL 2024 · Efficiently Disentangle Causal Representations · Yuanpeng Li, Joel Hestness, Mohamed Elhoseiny, Liang Zhao, Kenneth Church
NeurIPSW 2024 · Empirical Upper Bounds for Unstructured Sparsity in Compute-Efficient Language Modeling · Esha Singh, Shane Bergsma, Nolan Simran Dey, Joel Hestness, Gavia Gray
NeurIPS 2024 · Normalization Layer Per-Example Gradients Are Sufficient to Predict Gradient Noise Scale in Transformers · Gavia Gray, Aman Tiwari, Shane Bergsma, Joel Hestness
NeurIPS 2024 · Sparse Maximal Update Parameterization: A Holistic Approach to Sparse Training Dynamics · Nolan Dey, Shane Bergsma, Joel Hestness
NeurIPSW 2023 · Efficient and Approximate Per-Example Gradient Norms for Gradient Noise Scale · Gavia Gray, Anshul Samar, Joel Hestness