Mondelli, Marco

37 publications

NeurIPS 2025 Attention with Trained Embeddings Provably Selects Important Tokens Diyuan Wu, Aleksandr Shevchenko, Samet Oymak, Marco Mondelli
ICLR 2025 High-Dimensional Analysis of Knowledge Distillation: Weak-to-Strong Generalization and Scaling Laws Muhammed Emrullah Ildiz, Halil Alperen Gozeten, Ege Onur Taga, Marco Mondelli, Samet Oymak
ICML 2025 Neural Collapse Beyond the Unconstrained Features Model: Landscape, Dynamics, and Generalization in the Mean-Field Regime Diyuan Wu, Marco Mondelli
NeurIPS 2025 Neural Collapse Is Globally Optimal in Deep Regularized ResNets and Transformers Peter Súkeník, Christoph H. Lampert, Marco Mondelli
COLT 2025 Spectral Estimators for Multi-Index Models: Precise Asymptotics and Optimal Weak Recovery Filip Kovačević, Yihan Zhang, Marco Mondelli
ICML 2025 Spurious Correlations in High Dimensional Regression: The Roles of Regularization, Simplicity Bias and Over-Parameterization Simone Bombari, Marco Mondelli
ICLRW 2025 Spurious Correlations in High Dimensional Regression: The Roles of Regularization, Simplicity Bias and Over-Parameterization Simone Bombari, Marco Mondelli
ICML 2025 Test-Time Training Provably Improves Transformers as In-Context Learners Halil Alperen Gozeten, Muhammed Emrullah Ildiz, Xuechen Zhang, Mahdi Soltanolkotabi, Marco Mondelli, Samet Oymak
ICLR 2025 Wide Neural Networks Trained with Weight Decay Provably Exhibit Neural Collapse Arthur Jacot, Peter Súkeník, Zihan Wang, Marco Mondelli
NeurIPS 2024 Average Gradient Outer Product as a Mechanism for Deep Neural Collapse Daniel Beaglehole, Peter Súkeník, Marco Mondelli, Mikhail Belkin
ICML 2024 Compression of Structured Data with Autoencoders: Provable Benefit of Nonlinearities and Depth Kevin Kögler, Aleksandr Shevchenko, Hamed Hassani, Marco Mondelli
COLT 2024 Contraction of Markovian Operators in Orlicz Spaces and Error Bounds for Markov Chain Monte Carlo (Extended Abstract) Amedeo Roberto Esposito, Marco Mondelli
ICML 2024 How Spurious Features Are Memorized: Precise Analysis for Random and NTK Features Simone Bombari, Marco Mondelli
TMLR 2024 Improved Convergence of Score-Based Diffusion Models via Prediction-Correction Francesco Pedrotti, Jan Maas, Marco Mondelli
NeurIPS 2024 Matrix Denoising with Doubly Heteroscedastic Noise: Fundamental Limits and Optimal Spectral Methods Yihan Zhang, Marco Mondelli
ICMLW 2024 Neural Collapse Versus Low-Rank Bias: Is Deep Neural Collapse Really Optimal? Peter Súkeník, Marco Mondelli, Christoph H. Lampert
NeurIPS 2024 Neural Collapse vs. Low-Rank Bias: Is Deep Neural Collapse Really Optimal? Peter Súkeník, Christoph H. Lampert, Marco Mondelli
COLT 2024 Spectral Estimators for Structured Generalized Linear Models via Approximate Message Passing (Extended Abstract) Yihan Zhang, Hong Chang Ji, Ramji Venkataramanan, Marco Mondelli
ICML 2024 Towards Understanding the Word Sensitivity of Attention Layers: A Study via Random Features Simone Bombari, Marco Mondelli
ICML 2023 Beyond the Universal Law of Robustness: Sharper Laws for Random Features and Neural Tangent Kernels Simone Bombari, Shayan Kiyani, Marco Mondelli
NeurIPS 2023 Deep Neural Collapse Is Provably Optimal for the Deep Unconstrained Features Model Peter Súkeník, Marco Mondelli, Christoph H. Lampert
ICML 2023 Fundamental Limits of Two-Layer Autoencoders, and Achieving Them with Gradient Methods Aleksandr Shevchenko, Kevin Kögler, Hamed Hassani, Marco Mondelli
TMLR 2023 Mean-Field Analysis for Heavy Ball Methods: Dropout-Stability, Connectivity, and Global Convergence Diyuan Wu, Vyacheslav Kungurtsev, Marco Mondelli
NeurIPSW 2023 Privacy at Interpolation: Precise Analysis for Random and NTK Features Simone Bombari, Marco Mondelli
ICML 2022 Estimation in Rotationally Invariant Generalized Linear Models via Approximate Message Passing Ramji Venkataramanan, Kevin Kögler, Marco Mondelli
NeurIPSW 2022 Mean-Field Analysis for Heavy Ball Methods: Dropout-Stability, Connectivity, and Global Convergence Diyuan Wu, Vyacheslav Kungurtsev, Marco Mondelli
JMLR 2022 Mean-Field Analysis of Piecewise Linear Solutions for Wide ReLU Networks Alexander Shevchenko, Vyacheslav Kungurtsev, Marco Mondelli
NeurIPS 2022 Memorization and Optimization in Deep Neural Networks with Minimum Over-Parameterization Simone Bombari, Mohammad Hossein Amani, Marco Mondelli
NeurIPS 2022 The Price of Ignorance: How Much Does It Cost to Forget Noise Structure in Low-Rank Matrix Estimation? Jean Barbier, TianQi Hou, Marco Mondelli, Manuel Saenz
AISTATS 2021 Approximate Message Passing with Spectral Initialization for Generalized Linear Models Marco Mondelli, Ramji Venkataramanan
NeurIPS 2021 PCA Initialization for Approximate Message Passing in Rotationally Invariant Models Marco Mondelli, Ramji Venkataramanan
ICML 2021 Tight Bounds on the Smallest Eigenvalue of the Neural Tangent Kernel for Deep ReLU Networks Quynh Nguyen, Marco Mondelli, Guido F. Montufar
NeurIPS 2021 When Are Solutions Connected in Deep Networks? Quynh N. Nguyen, Pierre Bréchet, Marco Mondelli
NeurIPS 2020 Global Convergence of Deep Networks with One Wide Layer Followed by Pyramidal Topology Quynh N. Nguyen, Marco Mondelli
ICML 2020 Landscape Connectivity and Dropout Stability of SGD Solutions for Over-Parameterized Neural Networks Alexander Shevchenko, Marco Mondelli
AISTATS 2019 On the Connection Between Learning Two-Layer Neural Networks and Tensor Decomposition Marco Mondelli, Andrea Montanari
COLT 2018 Fundamental Limits of Weak Recovery with Applications to Phase Retrieval Marco Mondelli, Andrea Montanari