Mahdavi, Mehrdad

41 publications

TMLR 2025. Low-Rank Momentum Factorization for Memory Efficient Training. Pouria Mahdavinia, Mehrdad Mahdavi.
AISTATS 2025. Stochastic Compositional Minimax Optimization with Provable Convergence Guarantees. Yuyang Deng, Fuli Qiao, Mehrdad Mahdavi.
NeurIPS 2024. Learn More, but Bother Less: Parameter Efficient Continual Learning. Fuli Qiao, Mehrdad Mahdavi.
AISTATS 2024. On the Generalization Ability of Unsupervised Pretraining. Yuyang Deng, Junyuan Hong, Jiayu Zhou, Mehrdad Mahdavi.
ICML 2024. Stochastic Quantum Sampling for Non-Logconcave Distributions and Estimating Partition Functions. Guneykan Ozgul, Xiantao Li, Mehrdad Mahdavi, Chunhao Wang.
NeurIPS 2023. Distributed Personalized Empirical Risk Minimization. Yuyang Deng, Mohammad Mahdi Kamani, Pouria Mahdavinia, Mehrdad Mahdavi.
ICLR 2023. Do We Really Need Complicated Model Architectures for Temporal Networks? Weilin Cong, Si Zhang, Jian Kang, Baichuan Yuan, Hao Wu, Xin Zhou, Hanghang Tong, Mehrdad Mahdavi.
AISTATS 2023. Efficiently Forgetting What You Have Learned in Graph Representation Learning via Projection. Weilin Cong, Mehrdad Mahdavi.
NeurIPS 2023. Mixture Weight Estimation and Model Prediction in Multi-Source Multi-Target Domain Adaptation. Yuyang Deng, Ilja Kuzborskij, Mehrdad Mahdavi.
NeurIPS 2023. Understanding Deep Gradient Leakage via Inversion Influence Functions. Haobo Zhang, Junyuan Hong, Yuyang Deng, Mehrdad Mahdavi, Jiayu Zhou.
AISTATS 2022. Local SGD Optimizes Overparameterized Neural Networks in Polynomial Time. Yuyang Deng, Mohammad Mahdi Kamani, Mehrdad Mahdavi.
MLJ 2022. Efficient Fair Principal Component Analysis. Mohammad Mahdi Kamani, Farzin Haddadpour, Rana Forsati, Mehrdad Mahdavi.
ICLR 2022. Learn Locally, Correct Globally: A Distributed Algorithm for Training Graph Neural Networks. Morteza Ramezani, Weilin Cong, Mehrdad Mahdavi, Mahmut Kandemir, Anand Sivasubramaniam.
ICLR 2022. Learning Distributionally Robust Models at Scale via Composite Optimization. Farzin Haddadpour, Mohammad Mahdi Kamani, Mehrdad Mahdavi, Amin Karbasi.
NeurIPS 2022. Tight Analysis of Extra-Gradient and Optimistic Gradient Methods for Nonconvex Minimax Problems. Pouria Mahdavinia, Yuyang Deng, Haochuan Li, Mehrdad Mahdavi.
AISTATS 2021. Federated Learning with Compression: Unified Analysis and Sharp Guarantees. Farzin Haddadpour, Mohammad Mahdi Kamani, Aryan Mokhtari, Mehrdad Mahdavi.
AISTATS 2021. Local Stochastic Gradient Descent Ascent: Convergence Analysis and Communication Efficiency. Yuyang Deng, Mehrdad Mahdavi.
NeurIPS 2021. Meta-Learning with an Adaptive Task Scheduler. Huaxiu Yao, Yu Wang, Ying Wei, Peilin Zhao, Mehrdad Mahdavi, Defu Lian, Chelsea Finn.
NeurIPS 2021. On Provable Benefits of Depth in Training Graph Convolutional Networks. Weilin Cong, Morteza Ramezani, Mehrdad Mahdavi.
NeurIPS 2020. Distributionally Robust Federated Averaging. Yuyang Deng, Mohammad Mahdi Kamani, Mehrdad Mahdavi.
NeurIPS 2020. GCN Meets GPU: Decoupling “When to Sample” from “How to Sample”. Morteza Ramezani, Weilin Cong, Mehrdad Mahdavi, Anand Sivasubramaniam, Mahmut Kandemir.
NeurIPS 2020. Online Structured Meta-Learning. Huaxiu Yao, Yingbo Zhou, Mehrdad Mahdavi, Zhenhui Li, Richard Socher, Caiming Xiong.
NeurIPS 2019. Local SGD with Periodic Averaging: Tighter Analysis and Adaptive Synchronization. Farzin Haddadpour, Mohammad Mahdi Kamani, Mehrdad Mahdavi, Viveck Cadambe.
ICML 2019. Trading Redundancy for Communication: Speeding up Distributed SGD for Non-Convex Optimization. Farzin Haddadpour, Mohammad Mahdi Kamani, Mehrdad Mahdavi, Viveck Cadambe.
AISTATS 2017. Sketching Meets Random Projection in the Dual: A Provable Recovery Algorithm for Big and High-Dimensional Data. Jialei Wang, Jason D. Lee, Mehrdad Mahdavi, Mladen Kolar, Nati Srebro.
ICML 2016. Train and Test Tightness of LP Relaxations in Structured Prediction. Ofer Meshi, Mehrdad Mahdavi, Adrian Weller, David Sontag.
MLJ 2015. An Efficient Primal Dual Prox Method for Non-Smooth Optimization. Tianbao Yang, Mehrdad Mahdavi, Rong Jin, Shenghuo Zhu.
COLT 2015. Lower and Upper Bounds on the Generalization of Stochastic Exponentially Concave Optimization. Mehrdad Mahdavi, Lijun Zhang, Rong Jin.
NeurIPS 2015. Smooth and Strong: MAP Inference with Linear Convergence. Ofer Meshi, Mehrdad Mahdavi, Alex Schwing.
MLJ 2014. Regret Bounded by Gradual Variation for Online Convex Optimization. Tianbao Yang, Mehrdad Mahdavi, Rong Jin, Shenghuo Zhu.
NeurIPS 2013. Linear Convergence with Condition Number Independent Access of Full Gradients. Lijun Zhang, Mehrdad Mahdavi, Rong Jin.
NeurIPS 2013. Mixed Optimization for Smooth Functions. Mehrdad Mahdavi, Lijun Zhang, Rong Jin.
COLT 2013. Passive Learning with Target Risk. Mehrdad Mahdavi, Rong Jin.
COLT 2013. Recovering the Optimal Solution by Dual Random Projection. Lijun Zhang, Mehrdad Mahdavi, Rong Jin, Tianbao Yang, Shenghuo Zhu.
NeurIPS 2013. Stochastic Convex Optimization with Multiple Objectives. Mehrdad Mahdavi, Tianbao Yang, Rong Jin.
ICML 2012. Multiple Kernel Learning from Noisy Labels by Stochastic Programming. Tianbao Yang, Mehrdad Mahdavi, Rong Jin, Lijun Zhang, Yang Zhou.
NeurIPS 2012. Nyström Method vs Random Fourier Features: A Theoretical and Empirical Comparison. Tianbao Yang, Yu-Feng Li, Mehrdad Mahdavi, Rong Jin, Zhi-Hua Zhou.
AAAI 2012. Online Kernel Selection: Algorithms and Evaluations. Tianbao Yang, Mehrdad Mahdavi, Rong Jin, Jinfeng Yi, Steven C. H. Hoi.
COLT 2012. Online Optimization with Gradual Variations. Chao-Kai Chiang, Tianbao Yang, Chia-Jung Lee, Mehrdad Mahdavi, Chi-Jen Lu, Rong Jin, Shenghuo Zhu.
NeurIPS 2012. Stochastic Gradient Descent with Only One Projection. Mehrdad Mahdavi, Tianbao Yang, Rong Jin, Shenghuo Zhu, Jinfeng Yi.
JMLR 2012. Trading Regret for Efficiency: Online Convex Optimization with Long Term Constraints. Mehrdad Mahdavi, Rong Jin, Tianbao Yang.