Bach, Francis
111 publications
- Enhanced Feature Learning via Regularisation: Integrating Neural Networks and Kernel Methods. JMLR, 2025.
- Kernel Learning with Adversarial Features: Numerical Efficiency and Adaptive Regularization. NeurIPS, 2025.
- Scaling Laws for Gradient Descent and Sign Descent for Linear Bigram Models Under Zipf's Law. NeurIPS, 2025.
- On the Impact of Overparameterization on the Training of a Shallow Neural Network in High Dimensions. AISTATS, 2024.
- Implicit Bias of Gradient Descent for Wide Two-Layer Neural Networks Trained with the Logistic Loss. COLT, 2020.
- Statistical Estimation of the Poincaré Constant and Application to Sampling Multimodal Distributions. AISTATS, 2020.
- Accelerated Decentralized Optimization with Local Updates for Smooth and Strongly Convex Objectives. AISTATS, 2019.
- Fast and Faster Convergence of SGD for Over-Parameterized Models and an Accelerated Perceptron. AISTATS, 2019.
- Globally Convergent Newton Methods for Ill-Conditioned Generalized Self-Concordant Losses. NeurIPS, 2019.
- Stochastic First-Order Methods: Non-Asymptotic and Computer-Aided Analyses via Potential Functions. COLT, 2019.
- UniXGrad: A Universal, Adaptive Algorithm with Optimal Guarantees for Constrained Optimization. NeurIPS, 2019.
- Efficient Algorithms for Non-Convex Isotonic Regression Through Submodular Optimization. NeurIPS, 2018.
- On the Global Convergence of Gradient Descent for Over-Parameterized Models Using Optimal Transport. NeurIPS, 2018.
- Statistical Optimality of Stochastic Gradient Descent on Hard Learning Problems Through Multiple Passes. NeurIPS, 2018.
- Adaptivity of Averaged Stochastic Gradient Descent to Local Strong Convexity for Logistic Regression. JMLR, 2014.
- SAGA: A Fast Incremental Gradient Method with Support for Non-Strongly Convex Composite Objectives. NeurIPS, 2014.