Wyart, Matthieu

18 publications

ICML 2025. How Compositional Generalization and Creativity Improve as Diffusion Models Are Trained. Alessandro Favero, Antonio Sclocchi, Francesco Cagnetta, Pascal Frossard, Matthieu Wyart.
ICLRW 2025. How Compositional Generalization and Creativity Improve as Diffusion Models Are Trained. Alessandro Favero, Antonio Sclocchi, Francesco Cagnetta, Pascal Frossard, Matthieu Wyart.
ICML 2025. Learning Curves Theory for Hierarchically Compositional Data with Power-Law Distributed Features. Francesco Cagnetta, Hyunmo Kang, Matthieu Wyart.
NeurIPS 2025. On the Emergence of Linear Analogies in Word Embeddings. Daniel James Korchinski, Dhruva Karkada, Yasaman Bahri, Matthieu Wyart.
ICLR 2025. Probing the Latent Hierarchical Structure of Data via Diffusion Models. Antonio Sclocchi, Alessandro Favero, Noam Itzhak Levi, Matthieu Wyart.
ICML 2024. How Deep Networks Learn Sparse and Hierarchical Data: The Sparse Random Hierarchy Model. Umberto Maria Tomasini, Matthieu Wyart.
NeurIPSW 2024. How Rare Events Shape the Learning Curves of Hierarchical Data. Hyunmo Kang, Francesco Cagnetta, Matthieu Wyart.
NeurIPSW 2024. Token-Token Correlations Predict the Scaling of the Test Loss with the Number of Input Tokens. Francesco Cagnetta, Matthieu Wyart.
NeurIPS 2024. Towards a Theory of How the Structure of Language Is Acquired by Deep Neural Networks. Francesco Cagnetta, Matthieu Wyart.
NeurIPSW 2024. Unraveling the Latent Hierarchical Structure of Language and Images via Diffusion Models. Antonio Sclocchi, Noam Itzhak Levi, Alessandro Favero, Matthieu Wyart.
ICML 2023. Dissecting the Effects of SGD Noise in Distinct Regimes of Deep Learning. Antonio Sclocchi, Mario Geiger, Matthieu Wyart.
ICLRW 2023. How Deep Convolutional Neural Networks Lose Spatial Information with Training. Umberto Maria Tomasini, Leonardo Petrini, Francesco Cagnetta, Matthieu Wyart.
ICML 2023. What Can Be Learnt with Wide Convolutional Neural Networks? Francesco Cagnetta, Alessandro Favero, Matthieu Wyart.
ICML 2022. Failure and Success of the Spectral Bias Prediction for Laplace Kernel Ridge Regression: The Case of Low-Dimensional Data. Umberto Maria Tomasini, Antonio Sclocchi, Matthieu Wyart.
NeurIPS 2022. Learning Sparse Features Can Lead to Overfitting in Neural Networks. Leonardo Petrini, Francesco Cagnetta, Eric Vanden-Eijnden, Matthieu Wyart.
NeurIPS 2021. Locality Defeats the Curse of Dimensionality in Convolutional Teacher-Student Scenarios. Alessandro Favero, Francesco Cagnetta, Matthieu Wyart.
NeurIPS 2021. Relative Stability Toward Diffeomorphisms Indicates Performance in Deep Nets. Leonardo Petrini, Alessandro Favero, Mario Geiger, Matthieu Wyart.
ICML 2018. Comparing Dynamics: Deep Neural Networks Versus Glassy Systems. Marco Baity-Jesi, Levent Sagun, Mario Geiger, Stefano Spigler, Gerard Ben Arous, Chiara Cammarota, Yann LeCun, Matthieu Wyart, Giulio Biroli.