ML Anthology
Telgarsky, Matus
29 publications
ICML 2025
Benefits of Early Stopping in Gradient Descent for Overparameterized Logistic Regression
Jingfeng Wu, Peter Bartlett, Matus Telgarsky, Bin Yu

COLT 2024
Large Stepsize Gradient Descent for Logistic Loss: Non-Monotonicity of the Loss Improves Optimization Efficiency
Jingfeng Wu, Peter L. Bartlett, Matus Telgarsky, Bin Yu

AISTATS 2024
Spectrum Extraction and Clipping for Implicitly Linear Layers
Ali Ebrahimpour Boroojeny, Matus Telgarsky, Hari Sundaram

ICML 2024
Transformers, Parallel Computation, and Logarithmic Depth
Clayton Sanford, Daniel Hsu, Matus Telgarsky

ICLR 2023
Feature Selection and Low Test Error in Shallow Low-Rotation ReLU Networks
Matus Telgarsky

ICLR 2023
On Achieving Optimal Adversarial Test Error
Justin D. Li, Matus Telgarsky

NeurIPSW 2023
Spectrum Extraction and Clipping for Implicitly Linear Layers
Ali Ebrahimpour-Boroojeny, Matus Telgarsky, Hari Sundaram

ICLR 2022
Actor-Critic Is Implicitly Biased Towards High Entropy Optimal Policies
Yuzheng Hu, Ziwei Ji, Matus Telgarsky

COLT 2022
Stochastic Linear Optimization Never Overfits with Quadratically-Bounded Losses on General Data
Matus Telgarsky

ALT 2021
Characterizing the Implicit Bias via a Primal-Dual Analysis
Ziwei Ji, Matus Telgarsky

COLT 2021
Fast Margin Maximization via Dual Acceleration
Ziwei Ji, Nathan Srebro, Matus Telgarsky

ICLR 2021
Generalization Bounds via Distillation
Daniel Hsu, Ziwei Ji, Matus Telgarsky, Lan Wang

COLT 2020
Gradient Descent Follows the Regularization Path for General Losses
Ziwei Ji, Miroslav Dudík, Robert E. Schapire, Matus Telgarsky

ICLR 2020
Neural Tangent Kernels, Transportation Mappings, and Universal Approximation
Ziwei Ji, Matus Telgarsky, Ruicheng Xian

ICLR 2020
Polylogarithmic Width Suffices for Gradient Descent to Achieve Arbitrarily Small Test Error with Shallow ReLU Networks
Ziwei Ji, Matus Telgarsky

ICML 2019
A Gradual, Semi-Discrete Approach to Generative Network Training via Explicit Wasserstein Minimization
Yucheng Chen, Matus Telgarsky, Chao Zhang, Bolton Bailey, Daniel Hsu, Jian Peng

ICLR 2019
Gradient Descent Aligns the Layers of Deep Linear Networks
Ziwei Ji, Matus Telgarsky

COLT 2019
The Implicit Bias of Gradient Descent on Nonseparable Data
Ziwei Ji, Matus Telgarsky

ICML 2017
Neural Networks and Rational Functions
Matus Telgarsky

COLT 2017
Non-Convex Learning via Stochastic Gradient Langevin Dynamics: A Nonasymptotic Analysis
Maxim Raginsky, Alexander Rakhlin, Matus Telgarsky

COLT 2016
Benefits of Depth in Neural Networks
Matus Telgarsky

COLT 2015
Convex Risk Minimization and Conditional Probability Estimation
Matus Telgarsky, Miroslav Dudík

ALT 2015
Tensor Decompositions for Learning Latent Variable Models (a Survey for ALT)
Anima Anandkumar, Rong Ge, Daniel J. Hsu, Sham M. Kakade, Matus Telgarsky

JMLR 2014
Tensor Decompositions for Learning Latent Variable Models
Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham M. Kakade, Matus Telgarsky

COLT 2013
Boosting with the Logistic Loss Is Consistent
Matus Telgarsky

ICML 2013
Margins, Shrinkage, and Boosting
Matus Telgarsky

JMLR 2012
A Primal-Dual Convergence Analysis of Boosting
Matus Telgarsky

ICML 2012
Agglomerative Bregman Clustering
Matus Telgarsky, Sanjoy Dasgupta

AISTATS 2010
Hartigan’s Method: K-Means Clustering Without Voronoi
Matus Telgarsky, Andrea Vattani