Lee, Yin Tat

37 publications

TMLR 2025 Pseudo-Asynchronous Local SGD: Robust and Efficient Data-Parallel Training Hiroki Naganuma, Xinzhi Zhang, Man-Chung Yue, Ioannis Mitliagkas, Russell J. Hewett, Philipp Andre Witte, Yin Tat Lee
ICML 2024 Differentially Private Synthetic Data via Foundation Model APIs 2: Text Chulin Xie, Zinan Lin, Arturs Backurs, Sivakanth Gopi, Da Yu, Huseyin A. Inan, Harsha Nori, Haotian Jiang, Huishuai Zhang, Yin Tat Lee, Bo Li, Sergey Yekhanin
ICLRW 2024 Differentially Private Synthetic Data via Foundation Model APIs 2: Text Chulin Xie, Zinan Lin, Arturs Backurs, Sivakanth Gopi, Da Yu, Huseyin A. Inan, Harsha Nori, Haotian Jiang, Huishuai Zhang, Yin Tat Lee, Bo Li, Sergey Yekhanin
ICLRW 2024 MathChat: Converse to Tackle Challenging Math Problems with LLM Agents Yiran Wu, Feiran Jia, Shaokun Zhang, Hangyu Li, Erkang Zhu, Yue Wang, Yin Tat Lee, Richard Peng, Qingyun Wu, Chi Wang
NeurIPSW 2024 Pseudo-Asynchronous Local SGD: Robust and Efficient Data-Parallel Training Hiroki Naganuma, Xinzhi Zhang, Man-Chung Yue, Ioannis Mitliagkas, Russell J. Hewett, Philipp Andre Witte, Yin Tat Lee
COLT 2023 Algorithmic Aspects of the Log-Laplace Transform and a Non-Euclidean Proximal Sampler Sivakanth Gopi, Yin Tat Lee, Daogao Liu, Ruoqi Shen, Kevin Tian
COLT 2023 Condition-Number-Independent Convergence Rate of Riemannian Hamiltonian Monte Carlo with Numerical Integrators Yunbum Kook, Yin Tat Lee, Ruoqi Shen, Santosh Vempala
ICLR 2023 Exploring the Limits of Differentially Private Deep Learning with Group-Wise Clipping Jiyan He, Xuechen Li, Da Yu, Huishuai Zhang, Janardhan Kulkarni, Yin Tat Lee, Arturs Backurs, Nenghai Yu, Jiang Bian
NeurIPS 2023 Learning Threshold Neurons via Edge of Stability Kwangjun Ahn, Sébastien Bubeck, Sinho Chewi, Yin Tat Lee, Felipe Suarez, Yi Zhang
NeurIPS 2022 A Gradient Sampling Method with Complexity Guarantees for Lipschitz Functions in High and Low Dimensions Damek Davis, Dmitriy Drusvyatskiy, Yin Tat Lee, Swati Padmanabhan, Guanghao Ye
NeurIPS 2022 Decomposable Non-Smooth Convex Optimization with Nearly-Linear Gradient Oracle Complexity Sally Dong, Haotian Jiang, Yin Tat Lee, Swati Padmanabhan, Guanghao Ye
ICLR 2022 Differentially Private Fine-Tuning of Language Models Da Yu, Saurabh Naik, Arturs Backurs, Sivakanth Gopi, Huseyin A. Inan, Gautam Kamath, Janardhan Kulkarni, Yin Tat Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, Huishuai Zhang
COLT 2022 Private Convex Optimization via Exponential Mechanism Sivakanth Gopi, Yin Tat Lee, Daogao Liu
NeurIPS 2022 Sampling with Riemannian Hamiltonian Monte Carlo in a Constrained Space Yunbum Kook, Yin Tat Lee, Ruoqi Shen, Santosh Vempala
NeurIPS 2022 When Does Differentially Private Learning Not Suffer in High Dimensions? Xuechen Li, Daogao Liu, Tatsunori B. Hashimoto, Huseyin A. Inan, Janardhan Kulkarni, Yin Tat Lee, Abhradeep Guha Thakurta
NeurIPS 2021 Fast and Memory Efficient Differentially Private-SGD via JL Projections Zhiqi Bu, Sivakanth Gopi, Janardhan Kulkarni, Yin Tat Lee, Hanwen Shen, Uthaipon Tantipongpipat
NeurIPS 2021 Lower Bounds on Metropolized Sampling Methods for Well-Conditioned Distributions Yin Tat Lee, Ruoqi Shen, Kevin Tian
NeurIPS 2021 Numerical Composition of Differential Privacy Sivakanth Gopi, Yin Tat Lee, Lukas Wutschitz
NeurIPS 2021 Private Non-Smooth ERM and SCO in Subquadratic Steps Janardhan Kulkarni, Yin Tat Lee, Daogao Liu
COLT 2021 Structured Logconcave Sampling with a Restricted Gaussian Oracle Yin Tat Lee, Ruoqi Shen, Kevin Tian
NeurIPS 2020 Acceleration with a Ball Optimization Oracle Yair Carmon, Arun Jambulapati, Qijia Jiang, Yujia Jin, Yin Tat Lee, Aaron Sidford, Kevin Tian
COLT 2020 An $\widetilde{\mathcal{O}}(m/\varepsilon^{3.5})$-Cost Algorithm for Semidefinite Programs with Diagonal Constraints Yin Tat Lee, Swati Padmanabhan
ALT 2020 Leverage Score Sampling for Faster Accelerated Regression and ERM Naman Agarwal, Sham Kakade, Rahul Kidambi, Yin Tat Lee, Praneeth Netrapalli, Aaron Sidford
COLT 2020 Logsmooth Gradient Concentration and Tighter Runtimes for Metropolized Hamiltonian Monte Carlo Yin Tat Lee, Ruoqi Shen, Kevin Tian
NeurIPS 2020 Network Size and Size of the Weights in Memorization with Two-Layers Neural Networks Sébastien Bubeck, Ronen Eldan, Yin Tat Lee, Dan Mikulincer
COLT 2019 A Near-Optimal Algorithm for Approximating the John Ellipsoid Michael B. Cohen, Ben Cousins, Yin Tat Lee, Xin Yang
ICML 2019 Adversarial Examples from Computational Constraints Sébastien Bubeck, Yin Tat Lee, Eric Price, Ilya Razenshteyn
NeurIPS 2019 Complexity of Highly Parallel Non-Smooth Convex Optimization Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, Yuanzhi Li, Aaron Sidford
COLT 2019 Near Optimal Methods for Minimizing Convex Functions with Lipschitz $p$-th Derivatives Alexander Gasnikov, Pavel Dvurechensky, Eduard Gorbunov, Evgeniya Vorontsova, Daniil Selikhanovych, César A. Uribe, Bo Jiang, Haoyue Wang, Shuzhong Zhang, Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, Yuanzhi Li, Aaron Sidford
COLT 2019 Near-Optimal Method for Highly Smooth Convex Optimization Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, Yuanzhi Li, Aaron Sidford
JMLR 2019 Optimal Convergence Rates for Convex Distributed Optimization in Networks Kevin Scaman, Francis Bach, Sébastien Bubeck, Yin Tat Lee, Laurent Massoulié
COLT 2019 Solving Empirical Risk Minimization in the Current Matrix Multiplication Time Yin Tat Lee, Zhao Song, Qiuyi Zhang
NeurIPS 2019 The Randomized Midpoint Method for Log-Concave Sampling Ruoqi Shen, Yin Tat Lee
COLT 2018 Efficient Convex Optimization with Membership Oracles Yin Tat Lee, Aaron Sidford, Santosh S. Vempala
NeurIPS 2018 Optimal Algorithms for Non-Smooth Distributed Optimization in Networks Kevin Scaman, Francis Bach, Sébastien Bubeck, Laurent Massoulié, Yin Tat Lee
ICML 2017 Optimal Algorithms for Smooth and Strongly Convex Distributed Optimization in Networks Kevin Scaman, Francis Bach, Sébastien Bubeck, Yin Tat Lee, Laurent Massoulié
ICML 2016 Black-Box Optimization with a Politician Sébastien Bubeck, Yin Tat Lee