Zhou, Yufa

10 publications

[ICLR 2025] Beyond Linear Approximations: A Novel Pruning Approach for Attention Matrix. Yingyu Liang, Jiangxuan Long, Zhenmei Shi, Zhao Song, Yufa Zhou.

[NeurIPS 2025] Efficient Multi-Modal Large Language Models via Progressive Consistency Distillation. Zichen Wen, Shaobo Wang, Yufa Zhou, Junyuan Zhang, Qintong Zhang, Yifeng Gao, Zhaorun Chen, Bin Wang, Weijia Li, Conghui He, Linfeng Zhang.

[AAAI 2025] LazyDiT: Lazy Learning for the Acceleration of Diffusion Transformers. Xuan Shen, Zhao Song, Yufa Zhou, Bo Chen, Yanyu Li, Yifan Gong, Kai Zhang, Hao Tan, Jason Kuen, Henghui Ding, Zhihao Shu, Wei Niu, Pu Zhao, Yanzhi Wang, Jiuxiang Gu.

[AISTATS 2025] Looped ReLU MLPs May Be All You Need as Practical Programmable Computers. Yingyu Liang, Zhizhou Sha, Zhenmei Shi, Zhao Song, Yufa Zhou.

[AAAI 2025] Numerical Pruning for Efficient Autoregressive Models. Xuan Shen, Zhao Song, Yufa Zhou, Bo Chen, Jing Liu, Ruiyi Zhang, Ryan A. Rossi, Hao Tan, Tong Yu, Xiang Chen, Yufan Zhou, Tong Sun, Pu Zhao, Yanzhi Wang, Jiuxiang Gu.

[ICCV 2025] Unraveling the Smoothness Properties of Diffusion Models: A Gaussian Mixture Perspective. Yingyu Liang, Zhizhou Sha, Zhenmei Shi, Zhao Song, Mingda Wan, Yufa Zhou.

[NeurIPS 2024 Workshop] Differential Privacy of Cross-Attention with Provable Guarantee. Yingyu Liang, Zhenmei Shi, Zhao Song, Yufa Zhou.

[NeurIPS 2024 Workshop] Differentially Private Attention Computation. Yeqi Gao, Zhao Song, Xin Yang, Yufa Zhou.

[NeurIPS 2024 Workshop] Multi-Layer Transformers Gradient Can Be Approximated in Almost Linear Time. Yingyu Liang, Zhizhou Sha, Zhenmei Shi, Zhao Song, Yufa Zhou.

[NeurIPS 2024 Workshop] Tensor Attention Training: Provably Efficient Learning of Higher-Order Transformers. Yingyu Liang, Zhenmei Shi, Zhao Song, Yufa Zhou.