Pesce, Luca

10 publications

AISTATS 2025. A Random Matrix Theory Perspective on the Spectrum of Learned Features and Asymptotic Generalization Capabilities. Yatin Dandi, Luca Pesce, Hugo Cui, Florent Krzakala, Yue Lu, Bruno Loureiro.

NeurIPS 2025. The Computational Advantage of Depth in Learning High-Dimensional Hierarchical Targets. Yatin Dandi, Luca Pesce, Lenka Zdeborová, Florent Krzakala.

ICML 2024. Asymptotics of Feature Learning in Two-Layer Networks After One Gradient-Step. Hugo Cui, Luca Pesce, Yatin Dandi, Florent Krzakala, Yue Lu, Lenka Zdeborová, Bruno Loureiro.

JMLR 2024. How Two-Layer Neural Networks Learn, One (Giant) Step at a Time. Yatin Dandi, Florent Krzakala, Bruno Loureiro, Luca Pesce, Ludovic Stephan.

ICML 2024. Online Learning and Information Exponents: The Importance of Batch Size & Time/Complexity Tradeoffs. Luca Arnaboldi, Yatin Dandi, Florent Krzakala, Bruno Loureiro, Luca Pesce, Ludovic Stephan.

ICMLW 2024. Repetita Iuvant: Data Repetition Allows SGD to Learn High-Dimensional Multi-Index Functions. Luca Arnaboldi, Yatin Dandi, Florent Krzakala, Luca Pesce, Ludovic Stephan.

ICML 2024. The Benefits of Reusing Batches for Gradient Descent in Two-Layer Networks: Breaking the Curse of Information and Leap Exponents. Yatin Dandi, Emanuele Troiani, Luca Arnaboldi, Luca Pesce, Lenka Zdeborová, Florent Krzakala.
ICML 2023. Are Gaussian Data All You Need? The Extents and Limits of Universality in High-Dimensional Generalized Linear Estimation. Luca Pesce, Florent Krzakala, Bruno Loureiro, Ludovic Stephan.
NeurIPSW 2023. How Two-Layer Neural Networks Learn, One (Giant) Step at a Time. Yatin Dandi, Florent Krzakala, Bruno Loureiro, Luca Pesce, Ludovic Stephan.

NeurIPS 2022. Subspace Clustering in High-Dimensions: Phase Transitions & Statistical-to-Computational Gap. Luca Pesce, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová.