Lowy, Andrew

14 publications

NeurIPS 2025 · Differentially Private Bilevel Optimization: Efficient Algorithms with Near-Optimal Rates · Andrew Lowy, Daogao Liu
ICMLW 2024 · Efficient Differentially Private Fine-Tuning of Diffusion Models · Jing Liu, Andrew Lowy, Toshiaki Koike-Akino, Kieran Parsons, Ye Wang
NeurIPS 2024 · Faster Algorithms for User-Level Private Stochastic Convex Optimization · Andrew Lowy, Daogao Liu, Hilal Asi
ICML 2024 · How to Make the Gradients Small Privately: Improved Rates for Differentially Private Non-Convex Optimization · Andrew Lowy, Jonathan Ullman, Stephen Wright
ICML 2024 · Optimal Differentially Private Model Training with Public Data · Andrew Lowy, Zeman Li, Tianjian Huang, Meisam Razaviyayn
ICML 2024 · Private Heterogeneous Federated Learning Without a Trusted Server Revisited: Error-Optimal and Communication-Efficient Algorithms for Convex Losses · Changyu Gao, Andrew Lowy, Xingyu Zhou, Stephen Wright
NeurIPSW 2023 · Exploring User-Level Gradient Inversion with a Diffusion Prior · Zhuohang Li, Andrew Lowy, Jing Liu, Toshiaki Koike-Akino, Bradley A. Malin, Kieran Parsons, Ye Wang
ICLR 2023 · Private Federated Learning Without a Trusted Server: Optimal Algorithms for Convex Losses · Andrew Lowy, Meisam Razaviyayn
AISTATS 2023 · Private Non-Convex Federated Learning Without a Trusted Server · Andrew Lowy, Ali Ghafelebashi, Meisam Razaviyayn
ALT 2023 · Private Stochastic Optimization with Large Worst-Case Lipschitz Parameter: Optimal Rates for (Non-Smooth) Convex Losses and Extension to Non-Convex Losses · Andrew Lowy, Meisam Razaviyayn
ICLR 2023 · Stochastic Differentially Private and Fair Learning · Andrew Lowy, Devansh Gupta, Meisam Razaviyayn
TMLR 2022 · A Stochastic Optimization Framework for Fair Risk Minimization · Andrew Lowy, Sina Baharlouei, Rakesh Pavan, Meisam Razaviyayn, Ahmad Beirami
NeurIPSW 2022 · A Stochastic Optimization Framework for Fair Risk Minimization · Andrew Lowy, Sina Baharlouei, Rakesh Pavan, Meisam Razaviyayn, Ahmad Beirami
NeurIPSW 2022 · Private Stochastic Optimization with Large Worst-Case Lipschitz Parameter: Optimal Rates for (Non-Smooth) Convex Losses & Extension to Non-Convex · Andrew Lowy, Meisam Razaviyayn