Dandi, Yatin

19 publications

AISTATS 2025: A Random Matrix Theory Perspective on the Spectrum of Learned Features and Asymptotic Generalization Capabilities. Yatin Dandi, Luca Pesce, Hugo Cui, Florent Krzakala, Yue Lu, Bruno Loureiro.
AISTATS 2025: Fundamental Computational Limits of Weak Learnability in High-Dimensional Multi-Index Models. Emanuele Troiani, Yatin Dandi, Leonardo Defilippis, Lenka Zdeborová, Bruno Loureiro, Florent Krzakala.
ICML 2025: Fundamental Limits of Learning in Sequence Multi-Index Models and Deep Attention Networks: High-Dimensional Asymptotics and Sharp Thresholds. Emanuele Troiani, Hugo Cui, Yatin Dandi, Florent Krzakala, Lenka Zdeborová.
NeurIPS 2025: Optimal Spectral Transitions in High-Dimensional Multi-Index Models. Leonardo Defilippis, Yatin Dandi, Pierre Mergny, Florent Krzakala, Bruno Loureiro.
NeurIPS 2025: The Computational Advantage of Depth in Learning High-Dimensional Hierarchical Targets. Yatin Dandi, Luca Pesce, Lenka Zdeborová, Florent Krzakala.
ICML 2024: Asymptotics of Feature Learning in Two-Layer Networks After One Gradient-Step. Hugo Cui, Luca Pesce, Yatin Dandi, Florent Krzakala, Yue Lu, Lenka Zdeborová, Bruno Loureiro.
ICMLW 2024: Fundamental Limits of Weak Learnability in High-Dimensional Multi-Index Models. Emanuele Troiani, Yatin Dandi, Leonardo Defilippis, Lenka Zdeborová, Bruno Loureiro, Florent Krzakala.
JMLR 2024: How Two-Layer Neural Networks Learn, One (Giant) Step at a Time. Yatin Dandi, Florent Krzakala, Bruno Loureiro, Luca Pesce, Ludovic Stephan.
ICML 2024: Online Learning and Information Exponents: The Importance of Batch Size & Time/Complexity Tradeoffs. Luca Arnaboldi, Yatin Dandi, Florent Krzakala, Bruno Loureiro, Luca Pesce, Ludovic Stephan.
ICMLW 2024: Repetita Iuvant: Data Repetition Allows SGD to Learn High-Dimensional Multi-Index Functions. Luca Arnaboldi, Yatin Dandi, Florent Krzakala, Luca Pesce, Ludovic Stephan.
ICML 2024: The Benefits of Reusing Batches for Gradient Descent in Two-Layer Networks: Breaking the Curse of Information and Leap Exponents. Yatin Dandi, Emanuele Troiani, Luca Arnaboldi, Luca Pesce, Lenka Zdeborová, Florent Krzakala.
NeurIPSW 2023: How Two-Layer Neural Networks Learn, One (Giant) Step at a Time. Yatin Dandi, Florent Krzakala, Bruno Loureiro, Luca Pesce, Ludovic Stephan.
NeurIPSW 2023: Learning from Setbacks: The Impact of Adversarial Initialization on Generalization Performance. Kavya Ravichandran, Yatin Dandi, Stefani Karp, Francesca Mignacco.
NeurIPS 2023: Universality Laws for Gaussian Mixtures in Generalized Linear Models. Yatin Dandi, Ludovic Stephan, Florent Krzakala, Bruno Loureiro, Lenka Zdeborová.
NeurIPSW 2022: Data-Heterogeneity-Aware Mixing for Decentralized Learning. Yatin Dandi, Anastasia Koloskova, Martin Jaggi, Sebastian U. Stich.
AAAI 2022: Implicit Gradient Alignment in Distributed and Federated Learning. Yatin Dandi, Luis Barba, Martin Jaggi.
AAAI 2021: Generalized Adversarially Learned Inference. Yatin Dandi, Homanga Bharadhwaj, Abhishek Kumar, Piyush Rai.
NeurIPSW 2021: NeurInt: Learning to Interpolate Through Neural ODEs. Avinandan Bose, Aniket Das, Yatin Dandi, Piyush Rai.
WACV 2020: Jointly Trained Image and Video Generation Using Residual Vectors. Yatin Dandi, Aniket Das, Soumye Singhal, Vinay Namboodiri, Piyush Rai.