Mokhtari, Aryan

63 publications

NeurIPS 2025 Affine-Invariant Global Non-Asymptotic Convergence Analysis of BFGS Under Self-Concordance Qiujiang Jin, Aryan Mokhtari
ICML 2025 Learning Mixtures of Experts with EM: A Mirror Descent Perspective Quentin Fruytier, Aryan Mokhtari, Sujay Sanghavi
NeurIPS 2025 Machine Unlearning Under Overparameterization Jacob L. Block, Aryan Mokhtari, Sanjay Shakkottai
NeurIPS 2025 On the Complexity of Finding Stationary Points in Nonconvex Simple Bilevel Optimization Jincheng Cao, Ruichen Jiang, Erfan Yazdandoost Hamedani, Aryan Mokhtari
ICLR 2025 On the Crucial Role of Initialization for Matrix Factorization Bingcong Li, Liang Zhang, Aryan Mokhtari, Niao He
COLT 2025 Provable Complexity Improvement of AdaGrad over SGD: Upper and Lower Bounds in Stochastic Non-Convex Optimization Ruichen Jiang, Devyani Maladkar, Aryan Mokhtari
NeurIPS 2025 Provable Meta-Learning with Low-Rank Adaptations Jacob L. Block, Sundararajan Srinivasan, Liam Collins, Aryan Mokhtari, Sanjay Shakkottai
NeurIPS 2024 Adaptive and Optimal Second-Order Optimistic Methods for Minimax Optimization Ruichen Jiang, Ali Kavis, Qiujiang Jin, Sujay Sanghavi, Aryan Mokhtari
NeurIPS 2024 An Accelerated Gradient Method for Convex Smooth Simple Bilevel Optimization Jincheng Cao, Ruichen Jiang, Erfan Yazdandoost Hamedani, Aryan Mokhtari
NeurIPS 2024 In-Context Learning with Transformers: SoftMax Attention Adapts to Function Lipschitzness Liam Collins, Advait Parulekar, Aryan Mokhtari, Sujay Sanghavi, Sanjay Shakkottai
AISTATS 2024 Krylov Cubic Regularized Newton: A Subspace Second-Order Method with Dimension-Free Convergence Rate Ruichen Jiang, Parameswaran Raman, Shoham Sabach, Aryan Mokhtari, Mingyi Hong, Volkan Cevher
NeurIPS 2024 Non-Asymptotic Global Convergence Analysis of BFGS with the Armijo-Wolfe Line Search Qiujiang Jin, Ruichen Jiang, Aryan Mokhtari
NeurIPSW 2024 On the Crucial Role of Initialization for Matrix Factorization Bingcong Li, Liang Zhang, Aryan Mokhtari, Niao He
ICML 2024 Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks Liam Collins, Hamed Hassani, Mahdi Soltanolkotabi, Aryan Mokhtari, Sanjay Shakkottai
TMLR 2024 Statistical and Computational Complexities of BFGS Quasi-Newton Method for Generalized Linear Models Qiujiang Jin, Tongzheng Ren, Nhat Ho, Aryan Mokhtari
NeurIPS 2024 Stochastic Newton Proximal Extragradient Method Ruichen Jiang, Michał Dereziński, Aryan Mokhtari
AISTATS 2023 A Conditional Gradient-Based Method for Simple Bilevel Optimization with Convex Lower-Level Problem Ruichen Jiang, Nazanin Abolfazli, Aryan Mokhtari, Erfan Yazdandoost Hamedani
NeurIPS 2023 Accelerated Quasi-Newton Proximal Extragradient: Faster Rate for Smooth Convex Optimization Ruichen Jiang, Aryan Mokhtari
NeurIPS 2023 Greedy Pruning with Group Lasso Provably Generalizes for Matrix Sensing Nived Rajaraman, Fnu Devvrit, Aryan Mokhtari, Kannan Ramchandran
COLT 2023 InfoNCE Loss Provably Learns Cluster-Preserving Representations Advait Parulekar, Liam Collins, Karthikeyan Shanmugam, Aryan Mokhtari, Sanjay Shakkottai
COLT 2023 Online Learning Guided Curvature Approximation: A Quasi-Newton Method with Global Non-Asymptotic Superlinear Convergence Ruichen Jiang, Qiujiang Jin, Aryan Mokhtari
NeurIPS 2023 Projection-Free Methods for Stochastic Simple Bilevel Optimization with Convex Lower-Level Problem Jincheng Cao, Ruichen Jiang, Nazanin Abolfazli, Erfan Yazdandoost Hamedani, Aryan Mokhtari
TMLR 2023 Straggler-Resilient Personalized Federated Learning Isidoros Tziotis, Zebang Shen, Ramtin Pedarsani, Hamed Hassani, Aryan Mokhtari
NeurIPSW 2022 Conditional Gradient-Based Method for Bilevel Optimization with Convex Lower-Level Problem Ruichen Jiang, Nazanin Abolfazli, Aryan Mokhtari, Erfan Yazdandoost Hamedani
NeurIPS 2022 FedAvg with Fine Tuning: Local Updates Lead to Representation Learning Liam Collins, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai
UAI 2022 Future Gradient Descent for Adapting the Temporal Shifting Data Distribution in Online Recommendation Systems Mao Ye, Ruichen Jiang, Haoxiang Wang, Dhruv Choudhary, Xiaocong Du, Bhargav Bhushanam, Aryan Mokhtari, Arun Kejariwal, Qiang Liu
CoLLAs 2022 How Does the Task Landscape Affect MAML Performance? Liam Collins, Aryan Mokhtari, Sanjay Shakkottai
ICML 2022 MAML and ANIL Provably Learn Representations Liam Collins, Aryan Mokhtari, Sewoong Oh, Sanjay Shakkottai
AISTATS 2022 Minimax Optimization: The Case of Convex-Submodular Arman Adibi, Aryan Mokhtari, Hamed Hassani
ICML 2022 Sharpened Quasi-Newton Methods: Faster Superlinear Rate and Larger Local Convergence Neighborhood Qiujiang Jin, Alec Koppel, Ketan Rajawat, Aryan Mokhtari
COLT 2022 The Power of Adaptivity in SGD: Self-Tuning Step Sizes with Unbounded Gradients and Affine Variance Matthew Faw, Isidoros Tziotis, Constantine Caramanis, Aryan Mokhtari, Sanjay Shakkottai, Rachel Ward
NeurIPS 2021 Exploiting Local Convergence of Quasi-Newton Methods Globally: Adaptive Sample Size Approach Qiujiang Jin, Aryan Mokhtari
ICML 2021 Exploiting Shared Representations for Personalized Federated Learning Liam Collins, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai
AISTATS 2021 Federated Learning with Compression: Unified Analysis and Sharp Guarantees Farzin Haddadpour, Mohammad Mahdi Kamani, Aryan Mokhtari, Mehrdad Mahdavi
NeurIPS 2021 Generalization of Model-Agnostic Meta-Learning Algorithms: Recurring and Unseen Tasks Alireza Fallah, Aryan Mokhtari, Asuman Ozdaglar
NeurIPS 2021 On the Convergence Theory of Debiased Model-Agnostic Meta-Reinforcement Learning Alireza Fallah, Kristian Georgiev, Aryan Mokhtari, Asuman Ozdaglar
JMLR 2020 A Class of Parallel Doubly Stochastic Algorithms for Large-Scale Learning Aryan Mokhtari, Alec Koppel, Martin Takac, Alejandro Ribeiro
AISTATS 2020 A Unified Analysis of Extra-Gradient and Optimistic Gradient Methods for Saddle Point Problems: Proximal Point Approach Aryan Mokhtari, Asuman Ozdaglar, Sarath Pattathil
AISTATS 2020 DAve-QN: A Distributed Averaged Quasi-Newton Method with Local Superlinear Convergence Rate Saeed Soori, Konstantin Mishchenko, Aryan Mokhtari, Maryam Mehri Dehnavi, Mert Gurbuzbalaban
AISTATS 2020 Efficient Distributed Hessian Free Algorithm for Large-Scale Empirical Risk Minimization via Accumulating Sample Strategy Majid Jahani, Xi He, Chenxin Ma, Aryan Mokhtari, Dheevatsa Mudigere, Alejandro Ribeiro, Martin Takac
AISTATS 2020 FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization Amirhossein Reisizadeh, Aryan Mokhtari, Hamed Hassani, Ali Jadbabaie, Ramtin Pedarsani
AISTATS 2020 On the Convergence Theory of Gradient-Based Model-Agnostic Meta-Learning Algorithms Alireza Fallah, Aryan Mokhtari, Asuman Ozdaglar
AISTATS 2020 One Sample Stochastic Frank-Wolfe Mingrui Zhang, Zebang Shen, Aryan Mokhtari, Hamed Hassani, Amin Karbasi
NeurIPS 2020 Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach Alireza Fallah, Aryan Mokhtari, Asuman Ozdaglar
ICML 2020 Quantized Decentralized Stochastic Learning over Directed Graphs Hossein Taheri, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani
AISTATS 2020 Quantized Frank-Wolfe: Faster Optimization, Lower Communication, and Projection Free Mingrui Zhang, Lin Chen, Aryan Mokhtari, Hamed Hassani, Amin Karbasi
NeurIPS 2020 Second Order Optimality in Decentralized Non-Convex Optimization via Perturbed Gradient Tracking Isidoros Tziotis, Constantine Caramanis, Aryan Mokhtari
JMLR 2020 Stochastic Conditional Gradient Methods: From Convex Minimization to Submodular Maximization Aryan Mokhtari, Hamed Hassani, Amin Karbasi
NeurIPS 2020 Submodular Meta-Learning Arman Adibi, Aryan Mokhtari, Hamed Hassani
NeurIPS 2020 Task-Robust Model-Agnostic Meta-Learning Liam Collins, Aryan Mokhtari, Sanjay Shakkottai
AISTATS 2019 Efficient Nonconvex Empirical Risk Minimization via Adaptive Sample Size Methods Aryan Mokhtari, Asuman Ozdaglar, Ali Jadbabaie
NeurIPS 2019 Robust and Communication-Efficient Collaborative Learning Amirhossein Reisizadeh, Hossein Taheri, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani
NeurIPS 2019 Stochastic Continuous Greedy++: When Upper and Lower Bounds Match Amin Karbasi, Hamed Hassani, Aryan Mokhtari, Zebang Shen
AISTATS 2018 Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap Aryan Mokhtari, Hamed Hassani, Amin Karbasi
ICML 2018 Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings Aryan Mokhtari, Hamed Hassani, Amin Karbasi
NeurIPS 2018 Direct Runge-Kutta Discretization Achieves Acceleration Jingzhao Zhang, Aryan Mokhtari, Suvrit Sra, Ali Jadbabaie
NeurIPS 2018 Escaping Saddle Points in Constrained Optimization Aryan Mokhtari, Asuman Ozdaglar, Ali Jadbabaie
AISTATS 2018 Large Scale Empirical Risk Minimization via Truncated Adaptive Newton Method Mark Eisen, Aryan Mokhtari, Alejandro Ribeiro
ICML 2018 Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication Zebang Shen, Aryan Mokhtari, Tengfei Zhou, Peilin Zhao, Hui Qian
NeurIPS 2017 First-Order Adaptive Sample Size Methods to Reduce Complexity of Empirical Risk Minimization Aryan Mokhtari, Alejandro Ribeiro
NeurIPS 2016 Adaptive Newton Method for Empirical Risk Minimization to Statistical Accuracy Aryan Mokhtari, Hadi Daneshmand, Aurelien Lucchi, Thomas Hofmann, Alejandro Ribeiro
JMLR 2016 DSA: Decentralized Double Stochastic Averaging Gradient Algorithm Aryan Mokhtari, Alejandro Ribeiro
JMLR 2015 Global Convergence of Online Limited Memory BFGS Aryan Mokhtari, Alejandro Ribeiro