Noci, Lorenzo

21 publications

NeurIPS 2025. Don't Be Lazy: CompleteP Enables Compute-Efficient Deep Transformers. Nolan Dey, Bin Claire Zhang, Lorenzo Noci, Mufan Li, Blake Bordelon, Shane Bergsma, Cengiz Pehlevan, Boris Hanin, Joel Hestness.
ICML 2025. The Importance of Being Lazy: Scaling Limits of Continual Learning. Jacopo Graldi, Alessandro Breccia, Giulia Lanzillotta, Thomas Hofmann, Lorenzo Noci.
ICLR 2024. Depthwise Hyperparameter Transfer in Residual Networks: Dynamics and Scaling Limit. Blake Bordelon, Lorenzo Noci, Mufan Bill Li, Boris Hanin, Cengiz Pehlevan.
NeurIPSW 2024. Exploring the Limits of Feature Learning in Continual Learning. Jacopo Graldi, Giulia Lanzillotta, Lorenzo Noci, Benjamin F Grewe, Thomas Hofmann.
ICMLW 2024. Feature Learning Dynamics Under Grokking in a Sparse Parity Task. Javier Sanguino Bautiste, Gregor Bachmann, Bobby He, Lorenzo Noci, Thomas Hofmann.
AISTATS 2024. How Good Is a Single Basin? Kai Lion, Lorenzo Noci, Thomas Hofmann, Gregor Bachmann.
NeurIPS 2024. Super Consistency of Neural Network Landscapes and Learning Rate Transfer. Lorenzo Noci, Alexandru Meterez, Thomas Hofmann, Antonio Orvieto.
ICMLW 2024. Understanding and Minimising Outlier Features in Neural Network Training. Bobby He, Lorenzo Noci, Daniele Paliotta, Imanol Schlag, Thomas Hofmann.
NeurIPS 2024. Understanding and Minimising Outlier Features in Transformer Training. Bobby He, Lorenzo Noci, Daniele Paliotta, Imanol Schlag, Thomas Hofmann.
CVPR 2023. Achieving a Better Stability-Plasticity Trade-Off via Auxiliary Networks in Continual Learning. Sanghwan Kim, Lorenzo Noci, Antonio Orvieto, Thomas Hofmann.
NeurIPSW 2023. Depthwise Hyperparameter Transfer in Residual Networks: Dynamics and Scaling Limit. Blake Bordelon, Lorenzo Noci, Mufan Li, Boris Hanin, Cengiz Pehlevan.
NeurIPSW 2023. Disentangling Linear Mode Connectivity. Gül Sena Altıntaş, Gregor Bachmann, Lorenzo Noci, Thomas Hofmann.
NeurIPS 2023. Dynamic Context Pruning for Efficient and Interpretable Autoregressive Transformers. Sotiris Anagnostidis, Dario Pavllo, Luca Biggio, Lorenzo Noci, Aurelien Lucchi, Thomas Hofmann.
NeurIPSW 2023. How Good Is a Single Basin? Kai Lion, Gregor Bachmann, Lorenzo Noci, Thomas Hofmann.
ICLR 2023. The Curious Case of Benign Memorization. Sotiris Anagnostidis, Gregor Bachmann, Lorenzo Noci, Thomas Hofmann.
NeurIPS 2023. The Shaped Transformer: Attention Models in the Infinite Depth-and-Width Limit. Lorenzo Noci, Chuning Li, Mufan Li, Bobby He, Thomas Hofmann, Chris J Maddison, Dan Roy.
NeurIPSW 2022. Achieving a Better Stability-Plasticity Trade-Off via Auxiliary Networks in Continual Learning. Sanghwan Kim, Lorenzo Noci, Antonio Orvieto, Thomas Hofmann.
ICML 2022. How Tempering Fixes Data Augmentation in Bayesian Neural Networks. Gregor Bachmann, Lorenzo Noci, Thomas Hofmann.
NeurIPS 2022. Signal Propagation in Transformers: Theoretical Perspectives and the Role of Rank Collapse. Lorenzo Noci, Sotiris Anagnostidis, Luca Biggio, Antonio Orvieto, Sidak Pal Singh, Aurelien Lucchi.
NeurIPS 2021. Disentangling the Roles of Curation, Data-Augmentation and the Prior in the Cold Posterior Effect. Lorenzo Noci, Kevin Roth, Gregor Bachmann, Sebastian Nowozin, Thomas Hofmann.
NeurIPS 2021. Precise Characterization of the Prior Predictive Distribution of Deep ReLU Networks. Lorenzo Noci, Gregor Bachmann, Kevin Roth, Sebastian Nowozin, Thomas Hofmann.