Ozdaglar, Asuman

24 publications

AISTATS 2024. EM for Mixture of Linear Regression with Clustered Data. Amirhossein Reisizadeh, Khashayar Gatmiry, Asuman Ozdaglar.
NeurIPS 2023. A Finite-Sample Analysis of Payoff-Based Independent Learning in Zero-Sum Stochastic Games. Zaiwei Chen, Kaiqing Zhang, Eric Mazumdar, Asuman Ozdaglar, Adam Wierman.
NeurIPS 2023. Multi-Player Zero-Sum Markov Games with Networked Separable Interactions. Chanwoo Park, Kaiqing Zhang, Asuman Ozdaglar.
AISTATS 2023. Symmetric (Optimistic) Natural Policy Gradient for Multi-Agent Learning with Parameter Convergence. Sarath Pattathil, Kaiqing Zhang, Asuman Ozdaglar.
NeurIPS 2023. Time-Reversed Dissipation Induces Duality Between Minimizing Gradient Norm and Function Value. Jaeyeon Kim, Asuman Ozdaglar, Chanwoo Park, Ernest Ryu.
NeurIPS 2022. Bridging Central and Local Differential Privacy in Data Acquisition Mechanisms. Alireza Fallah, Ali Makhdoumi, Azarakhsh Malekian, Asuman Ozdaglar.
JMLR 2022. Robust Distributed Accelerated Stochastic Gradient Methods for Multi-Agent Networks. Alireza Fallah, Mert Gürbüzbalaban, Asuman Ozdaglar, Umut Şimşekli, Lingjiong Zhu.
NeurIPS 2022. What Is a Good Metric to Study Generalization of Minimax Learners? Asuman Ozdaglar, Sarath Pattathil, Jiawei Zhang, Kaiqing Zhang.
ICML 2021. A Wasserstein Minimax Framework for Mixed Linear Regression. Theo Diamandis, Yonina Eldar, Alireza Fallah, Farzan Farnia, Asuman Ozdaglar.
NeurIPS 2021. Decentralized Q-Learning in Zero-Sum Markov Games. Muhammed Sayin, Kaiqing Zhang, David Leslie, Tamer Başar, Asuman Ozdaglar.
NeurIPS 2021. Generalization of Model-Agnostic Meta-Learning Algorithms: Recurring and Unseen Tasks. Alireza Fallah, Aryan Mokhtari, Asuman Ozdaglar.
NeurIPS 2021. On the Convergence Theory of Debiased Model-Agnostic Meta-Reinforcement Learning. Alireza Fallah, Kristian Georgiev, Aryan Mokhtari, Asuman Ozdaglar.
ICML 2021. Train Simultaneously, Generalize Better: Stability of Gradient-Based Minimax Learners. Farzan Farnia, Asuman Ozdaglar.
AISTATS 2020. A Unified Analysis of Extra-Gradient and Optimistic Gradient Methods for Saddle Point Problems: Proximal Point Approach. Aryan Mokhtari, Asuman Ozdaglar, Sarath Pattathil.
L4DC 2020. Bayesian Learning with Adaptive Load Allocation Strategies. Manxi Wu, Saurabh Amin, Asuman Ozdaglar.
ICML 2020. Do GANs Always Have Nash Equilibria? Farzan Farnia, Asuman Ozdaglar.
COLT 2020. Last Iterate Is Slower than Averaged Iterate in Smooth Convex-Concave Saddle Point Problems. Noah Golowich, Sarath Pattathil, Constantinos Daskalakis, Asuman Ozdaglar.
AISTATS 2020. On the Convergence Theory of Gradient-Based Model-Agnostic Meta-Learning Algorithms. Alireza Fallah, Aryan Mokhtari, Asuman Ozdaglar.
NeurIPS 2020. Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach. Alireza Fallah, Aryan Mokhtari, Asuman Ozdaglar.
NeurIPS 2019. A Universally Optimal Multistage Accelerated Stochastic Gradient Method. Necdet Serhat Aybat, Alireza Fallah, Mert Gürbüzbalaban, Asuman Ozdaglar.
AISTATS 2019. Efficient Nonconvex Empirical Risk Minimization via Adaptive Sample Size Methods. Aryan Mokhtari, Asuman Ozdaglar, Ali Jadbabaie.
NeurIPS 2018. Escaping Saddle Points in Constrained Optimization. Aryan Mokhtari, Asuman Ozdaglar, Ali Jadbabaie.
NeurIPS 2017. When Cyclic Coordinate Descent Outperforms Randomized Coordinate Descent. Mert Gürbüzbalaban, Asuman Ozdaglar, Pablo A. Parrilo, Nuri Vanli.
NeurIPS 2013. Computing the Stationary Distribution Locally. Christina E. Lee, Asuman Ozdaglar, Devavrat Shah.