Tao, Molei

34 publications

NeurIPS 2025. A Closer Look at Model Collapse: From a Generalization-to-Memorization Perspective. Lianghe Shi, Meng Wu, Huijie Zhang, Zekai Zhang, Molei Tao, Qing Qu.
ICLRW 2025. Complexity Analysis of Normalizing Constant Estimation: From Jarzynski Equality to Annealed Importance Sampling and Beyond. Wei Guo, Molei Tao, Yongxin Chen.
ICML 2025. Diffuse Everything: Multimodal Diffusion Models on Arbitrary State Spaces. Kevin Rojas, Yuchen Zhu, Sichen Zhu, Felix X-F. Ye, Molei Tao.
ICLR 2025. Diffusion Generative Modeling for Spatially Resolved Gene Expression Inference from Histology Images. Sichen Zhu, Yuchen Zhu, Molei Tao, Peng Qiu.
NeurIPS 2025. Fast Non-Log-Concave Sampling Under Nonconvex Equality and Inequality Constraints with Landing. Kijung Jeon, Michael Muehlebach, Molei Tao.
NeurIPS 2025. Fast Solvers for Discrete Diffusion Models: Theory and Applications of High-Order Algorithms. Yinuo Ren, Haoxuan Chen, Yuchen Zhu, Wei Guo, Yongxin Chen, Grant M. Rotskoff, Molei Tao, Lexing Ying.
ICLRW 2025. Fast Solvers for Discrete Diffusion Models: Theory and Applications of High-Order Algorithms. Yinuo Ren, Haoxuan Chen, Yuchen Zhu, Wei Guo, Yongxin Chen, Grant M. Rotskoff, Molei Tao, Lexing Ying.
NeurIPS 2025. MDNS: Masked Diffusion Neural Sampler via Stochastic Optimal Control. Yuchen Zhu, Wei Guo, Jaemoo Choi, Guan-Horng Liu, Yongxin Chen, Molei Tao.
NeurIPS 2025. Non-Equilibrium Annealed Adjoint Sampler. Jaemoo Choi, Yongxin Chen, Molei Tao, Guan-Horng Liu.
ICLR 2025. Provable Benefit of Annealed Langevin Monte Carlo for Non-Log-Concave Sampling. Wei Guo, Molei Tao, Yongxin Chen.
WACV 2025. SODA: Spectral Orthogonal Decomposition Adaptation for Diffusion Models. Xinxi Zhang, Song Wen, Ligong Han, Felix Juefei-Xu, Akash Srivastava, Junzhou Huang, Vladimir Pavlovic, Hao Wang, Molei Tao, Dimitris Metaxas.
ICLR 2025. Trivialized Momentum Facilitates Diffusion Generative Modeling on Lie Groups. Yuchen Zhu, Tianrong Chen, Lingkai Kong, Evangelos Theodorou, Molei Tao.
NeurIPS 2025. Variational Learning Finds Flatter Solutions at the Edge of Stability. Avrajit Ghosh, Bai Cong, Rio Yokota, Saiprasad Ravishankar, Rongrong Wang, Molei Tao, Mohammad Emtiyaz Khan, Thomas Möllenhoff.
AISTATS 2025. Variational Schrödinger Momentum Diffusion. Kevin Rojas, Yixin Tan, Molei Tao, Yuriy Nevmyvaka, Wei Deng.
COLT 2024. Convergence of Kinetic Langevin Monte Carlo on Lie Groups. Lingkai Kong, Molei Tao.
NeurIPS 2024. Evaluating the Design Space of Diffusion-Based Generative Models. Yuqing Wang, Ye He, Molei Tao.
AISTATS 2024. Extragradient Type Methods for Riemannian Variational Inequality Problems. Zihao Hu, Guanghui Wang, Xi Wang, Andre Wibisono, Jacob D Abernethy, Molei Tao.
NeurIPS 2024. Provable Acceleration of Nesterov's Accelerated Gradient for Asymmetric Matrix Factorization and Linear Neural Networks. Zhenghao Xu, Yuqing Wang, Tuo Zhao, Rachel Ward, Molei Tao.
NeurIPS 2024. Quantitative Convergences of Lie Group Momentum Optimizers. Lingkai Kong, Molei Tao.
NeurIPS 2024. Zeroth-Order Sampling Methods for Non-Log-Concave Distributions: Alleviating Metastability by Denoising Diffusion. Ye He, Kevin Rojas, Molei Tao.
NeurIPS 2023. Deep Momentum Multi-Marginal Schrödinger Bridge. Tianrong Chen, Guan-Horng Liu, Molei Tao, Evangelos Theodorou.
NeurIPSW 2023. Good Regularity Creates Large Learning Rate Implicit Biases: Edge of Stability, Balancing, and Catapult. Yuqing Wang, Zhenghao Xu, Tuo Zhao, Molei Tao.
NeurIPS 2023. Mirror Diffusion Models for Constrained and Watermarked Generation. Guan-Horng Liu, Tianrong Chen, Evangelos Theodorou, Molei Tao.
ICLR 2023. Momentum Stiefel Optimizer, with Applications to Suitably-Orthogonal Attention, and Optimal Transport. Lingkai Kong, Yuqing Wang, Molei Tao.
ICLR 2023. gDDIM: Generalized Denoising Diffusion Implicit Models. Qinsheng Zhang, Molei Tao, Yongxin Chen.
NeurIPS 2022. Alternating Mirror Descent for Constrained Min-Max Games. Andre Wibisono, Molei Tao, Georgios Piliouras.
ICML 2022. Hessian-Free High-Resolution Nesterov Acceleration for Sampling. Ruilin Li, Hongyuan Zha, Molei Tao.
ICLR 2022. Large Learning Rate Tames Homogeneity: Convergence and Balancing Effect. Yuqing Wang, Minshuo Chen, Tuo Zhao, Molei Tao.
ICLR 2022. Sqrt(d) Dimension Dependence of Langevin Monte Carlo. Ruilin Li, Hongyuan Zha, Molei Tao.
ALT 2022. The Mirror Langevin Algorithm Converges with Vanishing Bias. Ruilin Li, Molei Tao, Santosh S. Vempala, Andre Wibisono.
ICML 2021. Data-Driven Prediction of General Hamiltonian Dynamics via Learning Exactly-Symplectic Maps. Renyi Chen, Molei Tao.
NeurIPS 2020. Stochasticity of Deterministic Gradient Descent: Large Learning Rate for Multiscale Objective Function. Lingkai Kong, Molei Tao.
AISTATS 2020. Variational Optimization on Lie Groups, with Examples of Leading (Generalized) Eigenvalue Problems. Molei Tao, Tomoki Ohsawa.
NeurIPS 2020. Why Do Deep Residual Networks Generalize Better than Deep Feedforward Networks? --- A Neural Tangent Kernel Perspective. Kaixuan Huang, Yuqing Wang, Molei Tao, Tuo Zhao.