Chizat, Lénaïc

16 publications

NeurIPS 2024: Deep Linear Networks for Regression Are Implicitly Regularized Towards Flat Minima. Pierre Marion, Lénaïc Chizat.
NeurIPS 2024: Mean-Field Langevin Dynamics for Signed Measures via a Bilevel Approach. Guillaume Wang, Alireza Mousavi-Hosseini, Lénaïc Chizat.
JMLR 2024: On the Effect of Initialization: The Scaling Path of 2-Layer Neural Networks. Sebastian Neumayer, Lénaïc Chizat, Michael Unser.
COLT 2024: Open Problem: Convergence of Single-Timescale Mean-Field Langevin Descent-Ascent for Two-Player Zero-Sum Games. Guillaume Wang, Lénaïc Chizat.
NeurIPS 2024: The Feature Speed Formula: A Flexible Approach to Scale Hyper-Parameters of Deep Neural Networks. Lénaïc Chizat, Praneeth Netrapalli.
JMLR 2024: Training Integrable Parameterizations of Deep Neural Networks in the Infinite-Width Limit. Karl Hajjar, Lénaïc Chizat, Christophe Giraud.
NeurIPS 2023: Computational Guarantees for Doubly Entropic Wasserstein Barycenters. Tomas Vaskevicius, Lénaïc Chizat.
NeurIPS 2023: Local Convergence of Gradient Methods for Min-Max Games: Partial Curvature Generically Suffices. Guillaume Wang, Lénaïc Chizat.
TMLR 2022: Mean-Field Langevin Dynamics: Exponential Convergence and Annealing. Lénaïc Chizat.
NeurIPS 2022: Trajectory Inference via Mean-Field Langevin in Path Space. Lénaïc Chizat, Stephen Zhang, Matthieu Heitz, Geoffrey Schiebinger.
NeurIPS 2020: Faster Wasserstein Distance Estimation with the Sinkhorn Divergence. Lénaïc Chizat, Pierre Roussillon, Flavien Léger, François-Xavier Vialard, Gabriel Peyré.
COLT 2020: Implicit Bias of Gradient Descent for Wide Two-Layer Neural Networks Trained with the Logistic Loss. Lénaïc Chizat, Francis Bach.
NeurIPS 2020: Statistical and Topological Properties of Sliced Probability Divergences. Kimia Nadjahi, Alain Durmus, Lénaïc Chizat, Soheil Kolouri, Shahin Shahrampour, Umut Simsekli.
NeurIPS 2019: On Lazy Training in Differentiable Programming. Lénaïc Chizat, Edouard Oyallon, Francis Bach.
AISTATS 2019: Sample Complexity of Sinkhorn Divergences. Aude Genevay, Lénaïc Chizat, Francis Bach, Marco Cuturi, Gabriel Peyré.
NeurIPS 2018: On the Global Convergence of Gradient Descent for Over-Parameterized Models Using Optimal Transport. Lénaïc Chizat, Francis Bach.