Lee, Holden

28 publications

COLT 2025. Efficiently Learning and Sampling Multimodal Distributions with Data-Based Initialization. Frederic Koehler, Holden Lee, Thuy-Duong Vuong.
COLT 2025. Learning Mixtures of Gaussians Using Diffusion Models. Khashayar Gatmiry, Jonathan Kelner, Holden Lee.
ICML 2024. How Flawed Is ECE? An Analysis via Logit Smoothing. Muthu Chidambaram, Holden Lee, Colin McSwiggen, Semon Rezchikov.
ICML 2024. Principled Gradient-Based MCMC for Conditional Sampling of Text. Li Du, Afra Amini, Lucas Torroba Hennigen, Xinyan Velocity Yu, Holden Lee, Jason Eisner, Ryan Cotterell.
NeurIPS 2024. What Does Guidance Do? A Fine-Grained Analysis in a Simple Setting. Muthu Chidambaram, Khashayar Gatmiry, Sitan Chen, Holden Lee, Jianfeng Lu.
NeurIPS 2023. Connecting Pre-Trained Language Model and Downstream Task via Properties of Representation. Chenwei Wu, Holden Lee, Rong Ge.
ALT 2023. Convergence of Score-Based Generative Modeling for General Data Distributions. Holden Lee, Jianfeng Lu, Yixin Tan.
ALT 2023. Fisher Information Lower Bounds for Sampling. Sinho Chewi, Patrik Gerber, Holden Lee, Chen Lu.
ICML 2023. Improved Analysis of Score-Based Generative Modeling: User-Friendly Bounds Under Minimal Smoothness Assumptions. Hongrui Chen, Holden Lee, Jianfeng Lu.
ICLR 2023. Pitfalls of Gaussians as a Noise Distribution in NCE. Holden Lee, Chirag Pabbaraju, Anish Prasad Sevekari, Andrej Risteski.
NeurIPS 2023. Provable Benefits of Score Matching. Chirag Pabbaraju, Dhruv Rohatgi, Anish Prasad Sevekari, Holden Lee, Ankur Moitra, Andrej Risteski.
ICMLW 2023. Provable Benefits of Score Matching. Chirag Pabbaraju, Dhruv Rohatgi, Anish Prasad Sevekari, Holden Lee, Ankur Moitra, Andrej Risteski.
NeurIPS 2023. The Probability Flow ODE Is Provably Fast. Sitan Chen, Sinho Chewi, Holden Lee, Yuanzhi Li, Jianfeng Lu, Adil Salim.
NeurIPS 2022. Convergence for Score-Based Generative Modeling with Polynomial Complexity. Holden Lee, Jianfeng Lu, Yixin Tan.
NeurIPSW 2022. Convergence of Score-Based Generative Modeling for General Data Distributions. Holden Lee, Jianfeng Lu, Yixin Tan.
ICML 2022. Extracting Latent State Representations with Linear Dynamics from Rich Observations. Abraham Frandsen, Rong Ge, Holden Lee.
ALT 2022. Improved Rates for Prediction and Identification of Partially Observed Linear Dynamical Systems. Holden Lee.
COLT 2022. Sampling Approximately Low-Rank Ising Models: MCMC Meets Variational Methods. Frederic Koehler, Holden Lee, Andrej Risteski.
ALT 2021. Efficient Sampling from the Bingham Distribution. Rong Ge, Holden Lee, Jianfeng Lu, Andrej Risteski.
NeurIPS 2021. Universal Approximation Using Well-Conditioned Normalizing Flows. Holden Lee, Chirag Pabbaraju, Anish Prasad Sevekari, Andrej Risteski.
ICMLW 2021. Universal Approximation for Log-Concave Distributions Using Well-Conditioned Normalizing Flows. Holden Lee, Chirag Pabbaraju, Anish Prasad Sevekari, Andrej Risteski.
COLT 2020. No-Regret Prediction in Marginally Stable Systems. Udaya Ghai, Holden Lee, Karan Singh, Cyril Zhang, Yi Zhang.
ALT 2020. Robust Guarantees for Learning an Autoregressive Filter. Holden Lee, Cyril Zhang.
NeurIPS 2019. Explaining Landscape Connectivity of Low-Cost Solutions for Multilayer Nets. Rohith Kuditipudi, Xiang Wang, Holden Lee, Yi Zhang, Zhiyuan Li, Wei Hu, Rong Ge, Sanjeev Arora.
NeurIPS 2019. Online Sampling from Log-Concave Distributions. Holden Lee, Oren Mangoubi, Nisheeth Vishnoi.
NeurIPS 2018. Beyond Log-Concavity: Provable Guarantees for Sampling Multi-Modal Distributions Using Simulated Tempering Langevin Monte Carlo. Rong Ge, Holden Lee, Andrej Risteski.
NeurIPS 2018. Spectral Filtering for General Linear Dynamical Systems. Elad Hazan, Holden Lee, Karan Singh, Cyril Zhang, Yi Zhang.
COLT 2017. On the Ability of Neural Nets to Express Distributions. Holden Lee, Rong Ge, Tengyu Ma, Andrej Risteski, Sanjeev Arora.