Fang, Gongfan

23 publications

CVPR 2025: Collaborative Decoding Makes Visual Auto-Regressive Modeling Efficient. Zigeng Chen, Xinyin Ma, Gongfan Fang, Xinchao Wang.
CVPR 2025: Diffusion Model Is Effectively Its Own Teacher. Xinyin Ma, Runpeng Yu, Songhua Liu, Gongfan Fang, Xinchao Wang.
TMLR 2025: Efficient Reasoning Models: A Survey. Sicheng Feng, Gongfan Fang, Xinyin Ma, Xinchao Wang.
CVPR 2025: PointLoRA: Low-Rank Adaptation with Token Selection for Point Cloud Learning. Song Wang, Xiaolu Liu, Lingdong Kong, Jianyun Xu, Chunyong Hu, Gongfan Fang, Wentong Li, Jianke Zhu, Xinchao Wang.
NeurIPS 2025: Thinkless: LLM Learns When to Think. Gongfan Fang, Xinyin Ma, Xinchao Wang.
CVPR 2025: TinyFusion: Diffusion Transformers Learned Shallow. Gongfan Fang, Kunjun Li, Xinyin Ma, Xinchao Wang.
NeurIPS 2025: VeriThinker: Learning to Verify Makes Reasoning Model Efficient. Zigeng Chen, Xinyin Ma, Gongfan Fang, Ruonan Yu, Xinchao Wang.
NeurIPS 2025: dKV-Cache: The Cache for Diffusion Language Models. Xinyin Ma, Runpeng Yu, Gongfan Fang, Xinchao Wang.
NeurIPS 2024: AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising. Zigeng Chen, Xinyin Ma, Gongfan Fang, Zhenxiong Tan, Xinchao Wang.
CVPR 2024: DeepCache: Accelerating Diffusion Models for Free. Xinyin Ma, Gongfan Fang, Xinchao Wang.
ECCV 2024: Isomorphic Pruning for Vision Models. Gongfan Fang, Xinyin Ma, Michael Bi Mi, Xinchao Wang.
NeurIPS 2024: Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching. Xinyin Ma, Gongfan Fang, Michael Bi Mi, Xinchao Wang.
NeurIPS 2024: MaskLLM: Learnable Semi-Structured Sparsity for Large Language Models. Gongfan Fang, Hongxu Yin, Saurav Muralidharan, Greg Heinrich, Jeff Pool, Jan Kautz, Pavlo Molchanov, Xinchao Wang.
NeurIPS 2024: Remix-DiT: Mixing Diffusion Transformers for Multi-Expert Denoising. Gongfan Fang, Xinyin Ma, Xinchao Wang.
NeurIPS 2024: SlimSAM: 0.1% Data Makes Segment Anything Slim. Zigeng Chen, Gongfan Fang, Xinyin Ma, Xinchao Wang.
CVPR 2023: DepGraph: Towards Any Structural Pruning. Gongfan Fang, Xinyin Ma, Mingli Song, Michael Bi Mi, Xinchao Wang.
NeurIPS 2023: LLM-Pruner: On the Structural Pruning of Large Language Models. Xinyin Ma, Gongfan Fang, Xinchao Wang.
NeurIPS 2023: Structural Pruning for Diffusion Models. Gongfan Fang, Xinyin Ma, Xinchao Wang.
IJCAI 2022: Prompting to Distill: Boosting Data-Free Knowledge Distillation via Reinforced Prompt. Xinyin Ma, Xinchao Wang, Gongfan Fang, Yongliang Shen, Weiming Lu.
AAAI 2022: Up to 100x Faster Data-Free Knowledge Distillation. Gongfan Fang, Kanya Mo, Xinchao Wang, Jie Song, Shitao Bei, Haofei Zhang, Mingli Song.
IJCAI 2021: Contrastive Model Inversion for Data-Free Knowledge Distillation. Gongfan Fang, Jie Song, Xinchao Wang, Chengchao Shen, Xingen Wang, Mingli Song.
NeurIPS 2021: Mosaicking to Distill: Knowledge Distillation from Out-of-Domain Data. Gongfan Fang, Yifan Bao, Jie Song, Xinchao Wang, Donglin Xie, Chengchao Shen, Mingli Song.
IJCAI 2019: Knowledge Amalgamation from Heterogeneous Networks by Common Feature Learning. Sihui Luo, Xinchao Wang, Gongfan Fang, Yao Hu, Dapeng Tao, Mingli Song.