Bergsma, Shane

10 publications

NeurIPS 2025. "Don't Be Lazy: CompleteP Enables Compute-Efficient Deep Transformers." Nolan Simran Dey, Bin Claire Zhang, Lorenzo Noci, Mufan Li, Blake Bordelon, Shane Bergsma, Cengiz Pehlevan, Boris Hanin, Joel Hestness.

NeurIPS 2025. "Power Lines: Scaling Laws for Weight Decay and Batch Size in LLM Pre-Training." Shane Bergsma, Nolan Simran Dey, Gurpreet Gosal, Gavia Gray, Daria Soboleva, Joel Hestness.

ICLR 2025. "Straight to Zero: Why Linearly Decaying the Learning Rate to Zero Works Best for LLMs." Shane Bergsma, Nolan Simran Dey, Gurpreet Gosal, Gavia Gray, Daria Soboleva, Joel Hestness.

NeurIPSW 2024. "Empirical Upper Bounds for Unstructured Sparsity in Compute-Efficient Language Modeling." Esha Singh, Shane Bergsma, Nolan Simran Dey, Joel Hestness, Gavia Gray.

NeurIPS 2024. "Normalization Layer Per-Example Gradients Are Sufficient to Predict Gradient Noise Scale in Transformers." Gavia Gray, Aman Tiwari, Shane Bergsma, Joel Hestness.

NeurIPS 2024. "Sparse Maximal Update Parameterization: A Holistic Approach to Sparse Training Dynamics." Nolan Dey, Shane Bergsma, Joel Hestness.

NeurIPS 2023. "SutraNets: Sub-Series Autoregressive Networks for Long-Sequence, Probabilistic Forecasting." Shane Bergsma, Tim Zeyl, Lei Guo.

NeurIPS 2022. "C2FAR: Coarse-to-Fine Autoregressive Networks for Precise Probabilistic Forecasting." Shane Bergsma, Tim Zeyl, Javad Rahimipour Anaraki, Lei Guo.

IJCAI 2011. "Learning Bilingual Lexicons Using the Visual Similarity of Labeled Web Images." Shane Bergsma, Benjamin Van Durme.

IJCAI 2009. "Web-Scale N-Gram Models for Lexical Disambiguation." Shane Bergsma, Dekang Lin, Randy Goebel.