Mokhtari, Aryan
63 publications
NeurIPS 2025: Affine-Invariant Global Non-Asymptotic Convergence Analysis of BFGS Under Self-Concordance
NeurIPS 2025: On the Complexity of Finding Stationary Points in Nonconvex Simple Bilevel Optimization
NeurIPS 2024: In-Context Learning with Transformers: SoftMax Attention Adapts to Function Lipschitzness
AISTATS 2024: Krylov Cubic Regularized Newton: A Subspace Second-Order Method with Dimension-Free Convergence Rate
TMLR 2024: Statistical and Computational Complexities of BFGS Quasi-Newton Method for Generalized Linear Models
AISTATS 2023: A Conditional Gradient-Based Method for Simple Bilevel Optimization with Convex Lower-Level Problem
NeurIPS 2023: Accelerated Quasi-Newton Proximal Extragradient: Faster Rate for Smooth Convex Optimization
NeurIPS 2023: Projection-Free Methods for Stochastic Simple Bilevel Optimization with Convex Lower-Level Problem
NeurIPSW 2022: Conditional Gradient-Based Method for Bilevel Optimization with Convex Lower-Level Problem
ICML 2022: Sharpened Quasi-Newton Methods: Faster Superlinear Rate and Larger Local Convergence Neighborhood
COLT 2022: The Power of Adaptivity in SGD: Self-Tuning Step Sizes with Unbounded Gradients and Affine Variance
NeurIPS 2021: Exploiting Local Convergence of Quasi-Newton Methods Globally: Adaptive Sample Size Approach
AISTATS 2020: DAve-QN: A Distributed Averaged Quasi-Newton Method with Local Superlinear Convergence Rate
AISTATS 2020: FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization
NeurIPS 2020: Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach
NeurIPS 2020: Second Order Optimality in Decentralized Non-Convex Optimization via Perturbed Gradient Tracking
JMLR 2020: Stochastic Conditional Gradient Methods: From Convex Minimization to Submodular Maximization