Krzakala, Florent

66 publications

AISTATS 2025 A High Dimensional Statistical Model for Adversarial Training: Geometry and Trade-Offs Kasimir Tanner, Matteo Vilucchio, Bruno Loureiro, Florent Krzakala
AISTATS 2025 A Random Matrix Theory Perspective on the Spectrum of Learned Features and Asymptotic Generalization Capabilities Yatin Dandi, Luca Pesce, Hugo Cui, Florent Krzakala, Yue Lu, Bruno Loureiro
NeurIPS 2025 Asymptotics of SGD in Sequence-Single Index Models and Single-Layer Attention Networks Luca Arnaboldi, Bruno Loureiro, Ludovic Stephan, Florent Krzakala, Lenka Zdeborova
AISTATS 2025 Fundamental Computational Limits of Weak Learnability in High-Dimensional Multi-Index Models Emanuele Troiani, Yatin Dandi, Leonardo Defilippis, Lenka Zdeborova, Bruno Loureiro, Florent Krzakala
ICML 2025 Fundamental Limits of Learning in Sequence Multi-Index Models and Deep Attention Networks: High-Dimensional Asymptotics and Sharp Thresholds Emanuele Troiani, Hugo Cui, Yatin Dandi, Florent Krzakala, Lenka Zdeborova
COLT 2025 Fundamental Limits of Matrix Sensing: Exact Asymptotics, Universality, and Applications Yizhou Xu, Antoine Maillard, Lenka Zdeborová, Florent Krzakala
NeurIPS 2025 Learning with Restricted Boltzmann Machines: Asymptotics of AMP and GD in High Dimensions Yizhou Xu, Florent Krzakala, Lenka Zdeborova
NeurIPS 2025 Optimal Spectral Transitions in High-Dimensional Multi-Index Models Leonardo Defilippis, Yatin Dandi, Pierre Mergny, Florent Krzakala, Bruno Loureiro
NeurIPS 2025 The Computational Advantage of Depth in Learning High-Dimensional Hierarchical Targets Yatin Dandi, Luca Pesce, Lenka Zdeborova, Florent Krzakala
NeurIPS 2025 The Nuclear Route: Sharp Asymptotics of ERM in Overparameterized Quadratic Networks Vittorio Erba, Emanuele Troiani, Lenka Zdeborova, Florent Krzakala
NeurIPS 2024 A Phase Transition Between Positional and Semantic Learning in a Solvable Model of Dot-Product Attention Hugo Cui, Freya Behrens, Florent Krzakala, Lenka Zdeborová
ICMLW 2024 A Phase Transition Between Positional and Semantic Learning in a Solvable Model of Dot-Product Attention Hugo Cui, Freya Behrens, Florent Krzakala, Lenka Zdeborova
UAI 2024 Analysis of Bootstrap and Subsampling in High-Dimensional Regularized Regression Lucas Clarté, Adrien Vandenbroucque, Guillaume Dalle, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová
ICLR 2024 Analysis of Learning a Flow-Based Generative Model from Limited Sample Complexity Hugo Cui, Florent Krzakala, Eric Vanden-Eijnden, Lenka Zdeborova
AISTATS 2024 Asymptotic Characterisation of the Performance of Robust Linear Regression in the Presence of Outliers Matteo Vilucchio, Emanuele Troiani, Vittorio Erba, Florent Krzakala
ICML 2024 Asymptotics of Feature Learning in Two-Layer Networks After One Gradient-Step Hugo Cui, Luca Pesce, Yatin Dandi, Florent Krzakala, Yue Lu, Lenka Zdeborova, Bruno Loureiro
NeurIPS 2024 Bayes-Optimal Learning of an Extensive-Width Neural Network from Quadratically Many Samples Antoine Maillard, Emanuele Troiani, Simon Martin, Lenka Zdeborová, Florent Krzakala
COLT 2024 Fundamental Limits of Non-Linear Low-Rank Matrix Estimation Pierre Mergny, Justin Ko, Florent Krzakala, Lenka Zdeborová
ICMLW 2024 Fundamental Limits of Weak Learnability in High-Dimensional Multi-Index Models Emanuele Troiani, Yatin Dandi, Leonardo Defilippis, Lenka Zdeborova, Bruno Loureiro, Florent Krzakala
JMLR 2024 How Two-Layer Neural Networks Learn, One (Giant) Step at a Time Yatin Dandi, Florent Krzakala, Bruno Loureiro, Luca Pesce, Ludovic Stephan
ICML 2024 Online Learning and Information Exponents: The Importance of Batch Size & Time/Complexity Tradeoffs Luca Arnaboldi, Yatin Dandi, Florent Krzakala, Bruno Loureiro, Luca Pesce, Ludovic Stephan
ICMLW 2024 Repetita Iuvant: Data Repetition Allows SGD to Learn High-Dimensional Multi-Index Functions Luca Arnaboldi, Yatin Dandi, Florent Krzakala, Luca Pesce, Ludovic Stephan
ICML 2024 Spectral Phase Transition and Optimal PCA in Block-Structured Spiked Models Pierre Mergny, Justin Ko, Florent Krzakala
ICML 2024 The Benefits of Reusing Batches for Gradient Descent in Two-Layer Networks: Breaking the Curse of Information and Leap Exponents Yatin Dandi, Emanuele Troiani, Luca Arnaboldi, Luca Pesce, Lenka Zdeborova, Florent Krzakala
ICML 2023 Are Gaussian Data All You Need? The Extents and Limits of Universality in High-Dimensional Generalized Linear Estimation Luca Pesce, Florent Krzakala, Bruno Loureiro, Ludovic Stephan
ICML 2023 Bayes-Optimal Learning of Deep Random Networks of Extensive-Width Hugo Cui, Florent Krzakala, Lenka Zdeborova
NeurIPSW 2023 Escaping Mediocrity: How Two-Layer Networks Learn Hard Generalized Linear Models Luca Arnaboldi, Florent Krzakala, Bruno Loureiro, Ludovic Stephan
UAI 2023 Expectation Consistency for Calibration of Neural Networks Lucas Clarté, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová
COLT 2023 From High-Dimensional & Mean-Field Dynamics to Dimensionless ODEs: A Unifying Approach to SGD in Two-Layers Networks Luca Arnaboldi, Ludovic Stephan, Florent Krzakala, Bruno Loureiro
NeurIPSW 2023 How Two-Layer Neural Networks Learn, One (Giant) Step at a Time Yatin Dandi, Florent Krzakala, Bruno Loureiro, Luca Pesce, Ludovic Stephan
AISTATS 2023 On Double-Descent in Uncertainty Quantification in Overparametrized Models Lucas Clarte, Bruno Loureiro, Florent Krzakala, Lenka Zdeborova
NeurIPS 2023 Optimal Algorithms for the Inhomogeneous Spiked Wigner Model Aleksandr Pak, Justin Ko, Florent Krzakala
JMLR 2023 Tree-AMP: Compositional Inference with Tree Approximate Message Passing Antoine Baker, Florent Krzakala, Benjamin Aubin, Lenka Zdeborová
NeurIPS 2023 Universality Laws for Gaussian Mixtures in Generalized Linear Models Yatin Dandi, Ludovic Stephan, Florent Krzakala, Bruno Loureiro, Lenka Zdeborová
ICML 2022 Fluctuations, Bias, Variance & Ensemble of Learners: Exact Asymptotics for Convex Losses in High-Dimension Bruno Loureiro, Cedric Gerbelot, Maria Refinetti, Gabriele Sicuro, Florent Krzakala
NeurIPS 2022 Multi-Layer State Evolution Under Random Convolutional Design Max Daniels, Cedric Gerbelot, Florent Krzakala, Lenka Zdeborová
NeurIPS 2022 Phase Diagram of Stochastic Gradient Descent in High-Dimensional Two-Layer Neural Networks Rodrigo Veiga, Ludovic Stephan, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová
NeurIPS 2022 Subspace Clustering in High-Dimensions: Phase Transitions & Statistical-to-Computational Gap Luca Pesce, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová
ICML 2021 Classifying High-Dimensional Gaussian Mixtures: Where Kernel Methods Fail and Neural Networks Succeed Maria Refinetti, Sebastian Goldt, Florent Krzakala, Lenka Zdeborova
NeurIPS 2021 Generalization Error Rates in Kernel Regression: The Crossover from the Noiseless to Noisy Regime Hugo Cui, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová
NeurIPS 2021 Learning Curves of Generic Features Maps for Realistic Datasets with a Teacher-Student Model Bruno Loureiro, Cedric Gerbelot, Hugo Cui, Sebastian Goldt, Florent Krzakala, Marc Mezard, Lenka Zdeborová
NeurIPS 2021 Learning Gaussian Mixtures with Generalized Linear Models: Precise Asymptotics in High-Dimensions Bruno Loureiro, Gabriele Sicuro, Cedric Gerbelot, Alessandro Pacco, Florent Krzakala, Lenka Zdeborová
COLT 2020 Asymptotic Errors for High-Dimensional Convex Penalized Linear Regression Beyond Gaussian Matrices Cédric Gerbelot, Alia Abbara, Florent Krzakala
NeurIPS 2020 Complex Dynamics in Simple Neural Networks: Understanding Gradient Flow in Phase Retrieval Stefano Sarao Mannelli, Giulio Biroli, Chiara Cammarota, Florent Krzakala, Pierfrancesco Urbani, Lenka Zdeborová
NeurIPS 2020 Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures Julien Launay, Iacopo Poli, François Boniface, Florent Krzakala
ICML 2020 Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime Stéphane D’Ascoli, Maria Refinetti, Giulio Biroli, Florent Krzakala
NeurIPS 2020 Dynamical Mean-Field Theory for Stochastic Gradient Descent in Gaussian Mixture Classification Francesca Mignacco, Florent Krzakala, Pierfrancesco Urbani, Lenka Zdeborová
ICML 2020 Generalisation Error in Learning with Random Features and the Hidden Manifold Model Federica Gerace, Bruno Loureiro, Florent Krzakala, Marc Mezard, Lenka Zdeborova
NeurIPS 2020 Generalization Error in High-Dimensional Perceptrons: Approaching Bayes Error with Convex Optimization Benjamin Aubin, Florent Krzakala, Yue Lu, Lenka Zdeborová
NeurIPS 2020 Phase Retrieval in High Dimensions: Statistical and Computational Phase Transitions Antoine Maillard, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová
NeurIPS 2020 Reservoir Computing Meets Recurrent Kernels and Structured Transforms Jonathan Dong, Ruben Ohana, Mushegh Rafayelyan, Florent Krzakala
ICML 2020 The Role of Regularization in Classification of High-Dimensional Noisy Gaussian Mixture Francesca Mignacco, Florent Krzakala, Yue Lu, Pierfrancesco Urbani, Lenka Zdeborova
NeurIPS 2019 Dynamics of Stochastic Gradient Descent for Two-Layer Neural Networks in the Teacher-Student Setup Sebastian Goldt, Madhu Advani, Andrew M Saxe, Florent Krzakala, Lenka Zdeborová
ICML 2019 Passed & Spurious: Descent Algorithms and Local Minima in Spiked Matrix-Tensor Models Stefano Sarao Mannelli, Florent Krzakala, Pierfrancesco Urbani, Lenka Zdeborova
NeurIPSW 2019 Precise Asymptotics for Phase Retrieval and Compressed Sensing with Random Generative Priors Benjamin Aubin, Bruno Loureiro, Antoine Baker, Florent Krzakala, Lenka Zdeborova
NeurIPS 2019 The Spiked Matrix Model with Generative Priors Benjamin Aubin, Bruno Loureiro, Antoine Maillard, Florent Krzakala, Lenka Zdeborová
NeurIPS 2019 Who Is Afraid of Big Bad Minima? Analysis of Gradient-Flow in Spiked Matrix-Tensor Models Stefano Sarao Mannelli, Giulio Biroli, Chiara Cammarota, Florent Krzakala, Lenka Zdeborová
NeurIPS 2018 Entropy and Mutual Information in Models of Deep Neural Networks Marylou Gabrié, Andre Manoel, Clément Luneau, Jean Barbier, Nicolas Macris, Florent Krzakala, Lenka Zdeborová
COLT 2018 Optimal Errors and Phase Transitions in High-Dimensional Generalized Linear Models Jean Barbier, Florent Krzakala, Nicolas Macris, Léo Miolane, Lenka Zdeborová
NeurIPS 2018 The Committee Machine: Computational to Statistical Gaps in Learning a Two-Layers Neural Network Benjamin Aubin, Antoine Maillard, Jean Barbier, Florent Krzakala, Nicolas Macris, Lenka Zdeborová
NeurIPS 2016 Mutual Information for Symmetric Rank-One Matrix Estimation: A Proof of the Replica Formula Jean Barbier, Mohamad Dia, Nicolas Macris, Florent Krzakala, Thibault Lesieur, Lenka Zdeborová
NeurIPS 2015 Matrix Completion from Fewer Entries: Spectral Detectability and Rank Estimation Alaa Saade, Florent Krzakala, Lenka Zdeborová
ICML 2015 Swept Approximate Message Passing for Sparse Estimation Andre Manoel, Florent Krzakala, Eric Tramel, Lenka Zdeborová
NeurIPS 2015 Training Restricted Boltzmann Machine via the Thouless-Anderson-Palmer Free Energy Marylou Gabrie, Eric W Tramel, Florent Krzakala
NeurIPS 2014 Spectral Clustering of Graphs with the Bethe Hessian Alaa Saade, Florent Krzakala, Lenka Zdeborová
NeurIPS 2013 Blind Calibration in Compressed Sensing Using Message Passing Algorithms Christophe Schulke, Francesco Caltagirone, Florent Krzakala, Lenka Zdeborová