Sidford, Aaron

50 publications

NeurIPS 2025 Balancing Gradient and Hessian Queries in Non-Convex Optimization Deeksha Adil, Brian Bullins, Aaron Sidford, Chenyi Zhang
NeurIPS 2025 Isotropic Noise in Stochastic and Quantum Convex Optimization Annie Marsden, Liam O'Carroll, Aaron Sidford, Chenyi Zhang
COLT 2024 Closing the Computational-Query Depth Gap in Parallel Stochastic Convex Optimization Arun Jambulapati, Aaron Sidford, Kevin Tian
COLT 2024 Faster Spectral Density Estimation and Sparsification in the Nuclear Norm (Extended Abstract) Yujia Jin, Ishani Karmarkar, Christopher Musco, Aaron Sidford, Apoorv Vikram Singh
NeurIPS 2024 Semi-Random Matrix Completion via Flow-Based Adaptive Reweighting Jonathan A. Kelner, Jerry Li, Allen Liu, Aaron Sidford, Kevin Tian
NeurIPS 2024 Truncated Variance Reduced Value Iteration Yujia Jin, Ishani Karmarkar, Aaron Sidford, Jiayi Wang
IJCAI 2023 Efficient Convex Optimization Requires Superlinear Memory (Extended Abstract) Annie Marsden, Vatsal Sharan, Aaron Sidford, Gregory Valiant
COLT 2023 Moments, Random Walks, and Limits for Spectrum Approximation Yujia Jin, Christopher Musco, Aaron Sidford, Apoorv Vikram Singh
NeurIPS 2023 Parallel Submodular Function Minimization Deeparnab Chakrabarty, Andrei Graur, Haotian Jiang, Aaron Sidford
NeurIPS 2023 Quantum Speedups for Stochastic Optimization Aaron Sidford, Chenyi Zhang
ICML 2023 Quantum Speedups for Zero-Sum Games via Improved Dynamic Gibbs Sampling Adam Bouland, Yosheb M. Getachew, Yujia Jin, Aaron Sidford, Kevin Tian
COLT 2023 Semi-Random Sparse Recovery in Nearly-Linear Time Jonathan Kelner, Jerry Li, Allen X. Liu, Aaron Sidford, Kevin Tian
NeurIPS 2023 Structured Semidefinite Programming for Recovering Structured Preconditioners Arun Jambulapati, Jerry Li, Christopher Musco, Kirankumar Shiragur, Aaron Sidford, Kevin Tian
NeurIPS 2023 Towards Optimal Effective Resistance Estimation Rajat Vadiraj Dwaraknath, Ishani Karmarkar, Aaron Sidford
COLT 2022 Big-Step-Little-Step: Efficient Gradient Methods for Objectives with Multiple Scales Jonathan Kelner, Annie Marsden, Vatsal Sharan, Aaron Sidford, Gregory Valiant, Honglin Yuan
COLT 2022 Efficient Convex Optimization Requires Superlinear Memory Annie Marsden, Vatsal Sharan, Aaron Sidford, Gregory Valiant
NeurIPS 2022 On the Efficient Implementation of High Accuracy Optimality of Profile Maximum Likelihood Moses Charikar, Zhihao Jiang, Kirankumar Shiragur, Aaron Sidford
NeurIPS 2022 Optimal and Adaptive Monteiro-Svaiter Acceleration Yair Carmon, Danielle Hausler, Arun Jambulapati, Yujia Jin, Aaron Sidford
ICML 2022 RECAPP: Crafting a More Efficient Catalyst for Convex Optimization Yair Carmon, Arun Jambulapati, Yujia Jin, Aaron Sidford
NeurIPSW 2022 Semi-Random Sparse Recovery in Nearly-Linear Time Jonathan Kelner, Jerry Li, Allen Liu, Aaron Sidford, Kevin Tian
COLT 2022 Sharper Rates for Separable Minimax and Finite Sum Optimization via Primal-Dual Extragradient Methods Yujia Jin, Aaron Sidford, Kevin Tian
NeurIPS 2021 Stochastic Bias-Reduced Gradient Methods Hilal Asi, Yair Carmon, Arun Jambulapati, Yujia Jin, Aaron Sidford
COLT 2021 The Bethe and Sinkhorn Permanents of Low Rank Matrices and Implications for Profile Maximum Likelihood Nima Anari, Moses Charikar, Kirankumar Shiragur, Aaron Sidford
COLT 2021 Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss Yair Carmon, Arun Jambulapati, Yujia Jin, Aaron Sidford
ICML 2021 Towards Tight Bounds on the Sample Complexity of Average-Reward MDPs Yujia Jin, Aaron Sidford
NeurIPS 2020 Acceleration with a Ball Optimization Oracle Yair Carmon, Arun Jambulapati, Qijia Jiang, Yujia Jin, Yin Tat Lee, Aaron Sidford, Kevin Tian
ICML 2020 Efficiently Solving MDPs with Stochastic Mirror Descent Yujia Jin, Aaron Sidford
NeurIPS 2020 Instance Based Approximations to Profile Maximum Likelihood Nima Anari, Moses Charikar, Kirankumar Shiragur, Aaron Sidford
NeurIPS 2020 Large-Scale Methods for Distributionally Robust Optimization Daniel Levy, Yair Carmon, John C. Duchi, Aaron Sidford
ALT 2020 Leverage Score Sampling for Faster Accelerated Regression and ERM Naman Agarwal, Sham Kakade, Rahul Kidambi, Yin Tat Lee, Praneeth Netrapalli, Aaron Sidford
COLT 2020 Near-Optimal Methods for Minimizing Star-Convex Functions and Beyond Oliver Hinder, Aaron Sidford, Nimit Sohoni
AISTATS 2020 Solving Discounted Stochastic Two-Player Games with Near-Optimal Time and Sample Complexity Aaron Sidford, Mengdi Wang, Lin Yang, Yinyu Ye
NeurIPS 2019 A Direct Õ(1/ε) Iteration Parallel Algorithm for Optimal Transport Arun Jambulapati, Aaron Sidford, Kevin Tian
NeurIPS 2019 A General Framework for Symmetric Property Estimation Moses Charikar, Kirankumar Shiragur, Aaron Sidford
NeurIPS 2019 Complexity of Highly Parallel Non-Smooth Convex Optimization Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, Yuanzhi Li, Aaron Sidford
COLT 2019 Near Optimal Methods for Minimizing Convex Functions with Lipschitz p-th Derivatives Alexander Gasnikov, Pavel Dvurechensky, Eduard Gorbunov, Evgeniya Vorontsova, Daniil Selikhanovych, César A. Uribe, Bo Jiang, Haoyue Wang, Shuzhong Zhang, Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, Yuanzhi Li, Aaron Sidford
COLT 2019 Near-Optimal Method for Highly Smooth Convex Optimization Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, Yuanzhi Li, Aaron Sidford
NeurIPS 2019 Principal Component Projection and Regression in Nearly Linear Time Through Asymmetric SVRG Yujia Jin, Aaron Sidford
NeurIPS 2019 Variance Reduction for Matrix Games Yair Carmon, Yujia Jin, Aaron Sidford, Kevin Tian
COLT 2018 Accelerating Stochastic Gradient Descent for Least Squares Regression Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Aaron Sidford
COLT 2018 Efficient Convex Optimization with Membership Oracles Yin Tat Lee, Aaron Sidford, Santosh S. Vempala
NeurIPS 2018 Exploiting Numerical Sparsity for Efficient Learning: Faster Eigenvector Computation and Regression Neha Gupta, Aaron Sidford
NeurIPS 2018 Near-Optimal Time and Sample Complexities for Solving Markov Decision Processes with a Generative Model Aaron Sidford, Mengdi Wang, Xian Wu, Lin Yang, Yinyu Ye
ICML 2017 “Convex Until Proven Guilty”: Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions Yair Carmon, John C. Duchi, Oliver Hinder, Aaron Sidford
ICML 2016 Efficient Algorithms for Large-Scale Generalized Eigenvector Computation and Canonical Correlation Analysis Rong Ge, Chi Jin, Sham M. Kakade, Praneeth Netrapalli, Aaron Sidford
ICML 2016 Faster Eigenvector Computation via Shift-and-Invert Preconditioning Dan Garber, Elad Hazan, Chi Jin, Sham M. Kakade, Cameron Musco, Praneeth Netrapalli, Aaron Sidford
ICML 2016 Principal Component Projection Without Principal Component Analysis Roy Frostig, Cameron Musco, Christopher Musco, Aaron Sidford
COLT 2016 Streaming PCA: Matching Matrix Bernstein and Near-Optimal Finite Sample Guarantees for Oja's Algorithm Prateek Jain, Chi Jin, Sham M. Kakade, Praneeth Netrapalli, Aaron Sidford
COLT 2015 Competing with the Empirical Risk Minimizer in a Single Pass Roy Frostig, Rong Ge, Sham M. Kakade, Aaron Sidford
ICML 2015 Un-Regularizing: Approximate Proximal Point and Faster Stochastic Algorithms for Empirical Risk Minimization Roy Frostig, Rong Ge, Sham Kakade, Aaron Sidford