Fu, Yonggan

30 publications

ICCV 2025. Fewer Denoising Steps or Cheaper Per-Step Inference: Towards Compute-Optimal Diffusion Model Deployment. Zhenbang Du, Yonggan Fu, Lifu Wang, Jiayi Qian, Xiao Luo, Yingyan Celine Lin.
ICLR 2025. Hymba: A Hybrid-Head Architecture for Small Language Models. Xin Dong, Yonggan Fu, Shizhe Diao, Wonmin Byeon, Zijia Chen, Ameya Sunil Mahabaleshwarkar, Shih-Yang Liu, Matthijs Van Keirsbilck, Min-Hung Chen, Yoshi Suhara, Yingyan Celine Lin, Jan Kautz, Pavlo Molchanov.
ICML 2025. LaCache: Ladder-Shaped KV Caching for Efficient Long-Context Modeling of Large Language Models. Dachuan Shi, Yonggan Fu, Xiangchi Yuan, Zhongzhi Yu, Haoran You, Sixu Li, Xin Dong, Jan Kautz, Pavlo Molchanov, Yingyan Celine Lin.
ICLR 2025. LongMamba: Enhancing Mamba's Long-Context Capabilities via Training-Free Receptive Field Enlargement. Zhifan Ye, Kejing Xia, Yonggan Fu, Xin Dong, Jihoon Hong, Xiangchi Yuan, Shizhe Diao, Jan Kautz, Pavlo Molchanov, Yingyan Celine Lin.
NeurIPS 2025. Nemotron-CLIMB: Clustering-Based Iterative Data Mixture Bootstrapping for Language Model Pre-Training. Shizhe Diao, Yu Yang, Yonggan Fu, Xin Dong, Dan Su, Markus Kliegl, Zijia Chen, Peter Belcak, Yoshi Suhara, Hongxu Yin, Mostofa Patwary, Yingyan Celine Lin, Jan Kautz, Pavlo Molchanov.
NeurIPS 2025. Nemotron-Flash: Towards Latency-Optimal Hybrid Small Language Models. Yonggan Fu, Xin Dong, Shizhe Diao, Matthijs Van Keirsbilck, Hanrong Ye, Wonmin Byeon, Yashaswi Karnati, Lucas Liebenwein, Maksim Khadkevich, Alexander Keller, Jan Kautz, Yingyan Celine Lin, Pavlo Molchanov.
NeurIPS 2024. AmoebaLLM: Constructing Any-Shape Large Language Models for Efficient and Instant Deployment. Yonggan Fu, Zhongzhi Yu, Junwei Li, Jiayi Qian, Yongan Zhang, Xiangchi Yuan, Dachuan Shi, Roman Yakunin, Yingyan Lin.
ECCV 2024. Omni-Recon: Harnessing Image-Based Rendering for General-Purpose Neural Radiance Fields. Yonggan Fu, Huaizhi Qu, Zhifan Ye, Chaojian Li, Kevin Zhao, Yingyan Lin.
NeurIPS 2024. Rad-NeRF: Ray-Decoupled Training of Neural Radiance Field. Lidong Guo, Xuefei Ning, Yonggan Fu, Tianchen Zhao, Zhuoliang Kang, Jincheng Yu, Yingyan Lin, Yu Wang.
ICML 2024. Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models Without Training Through Attention Calibration. Zhongzhi Yu, Zheng Wang, Yonggan Fu, Huihong Shi, Khalid Shaikh, Yingyan Celine Lin.
CVPR 2023. Auto-CARD: Efficient and Robust Codec Avatar Driving for Real-Time Mobile Telepresence. Yonggan Fu, Yuecheng Li, Chenghui Li, Jason Saragih, Peizhao Zhang, Xiaoliang Dai, Yingyan Lin.
CVPR 2023. Hint-Aug: Drawing Hints from Foundation Vision Transformers Towards Boosted Few-Shot Parameter-Efficient Tuning. Zhongzhi Yu, Shang Wu, Yonggan Fu, Shunyao Zhang, Yingyan Lin.
ICML 2023. Master-ASR: Achieving Multilingual Scalability and Low-Resource Adaptation in ASR with Modular Learning. Zhongzhi Yu, Yang Zhang, Kaizhi Qian, Cheng Wan, Yonggan Fu, Yongan Zhang, Yingyan Celine Lin.
ICML 2023. NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance Fields Against Adversarial Perturbations. Yonggan Fu, Ye Yuan, Souvik Kundu, Shang Wu, Shunyao Zhang, Yingyan Celine Lin.
ICML 2022. DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks. Yonggan Fu, Haichuan Yang, Jiayi Yuan, Meng Li, Cheng Wan, Raghuraman Krishnamoorthi, Vikas Chandra, Yingyan Lin.
AAAI 2022. Early-Bird GCNs: Graph-Network Co-Optimization Towards More Efficient GCN Training and Inference via Drawing Early-Bird Lottery Tickets. Haoran You, Zhihan Lu, Zijian Zhou, Yonggan Fu, Yingyan Lin.
NeurIPS 2022. Losses Can Be Blessings: Routing Self-Supervised Speech Representations Towards Efficient Multilingual and Multitask Speech Processing. Yonggan Fu, Yang Zhang, Kaizhi Qian, Zhifan Ye, Zhongzhi Yu, Cheng-I Jeff Lai, Yingyan Lin.
AAAI 2022. MIA-Former: Efficient and Robust Vision Transformers via Multi-Grained Input-Adaptation. Zhongzhi Yu, Yonggan Fu, Sicheng Li, Chaojian Li, Yingyan Lin.
ICLR 2022. Patch-Fool: Are Vision Transformers Always Robust Against Adversarial Perturbations? Yonggan Fu, Shunyao Zhang, Shang Wu, Cheng Wan, Yingyan Lin.
ICML 2022. ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks. Haoran You, Baopu Li, Huihong Shi, Yonggan Fu, Yingyan Lin.
ICML 2021. Auto-NBA: Efficient and Effective Search over the Joint Space of Networks, Bitwidths, and Accelerators. Yonggan Fu, Yongan Zhang, Yang Zhang, David Cox, Yingyan Lin.
ICLR 2021. CPT: Efficient Deep Neural Network Training via Cyclic Precision. Yonggan Fu, Han Guo, Meng Li, Xin Yang, Yining Ding, Vikas Chandra, Yingyan Lin.
ICML 2021. Double-Win Quant: Aggressively Winning Robustness of Quantized Deep Neural Networks via Random Precision Training and Inference. Yonggan Fu, Qixuan Yu, Meng Li, Vikas Chandra, Yingyan Lin.
NeurIPS 2021. Drawing Robust Scratch Tickets: Subnetworks with Inborn Robustness Are Found Within Randomly Initialized Networks. Yonggan Fu, Qixuan Yu, Yang Zhang, Shang Wu, Xu Ouyang, David Cox, Yingyan Lin.
ICLR 2021. HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark. Chaojian Li, Zhongzhi Yu, Yonggan Fu, Yongan Zhang, Yang Zhao, Haoran You, Qixuan Yu, Yue Wang, Cong Hao, Yingyan Lin.
ICCV 2021. SACoD: Sensor Algorithm Co-Design Towards Efficient CNN-Powered Intelligent PhlatCam. Yonggan Fu, Yang Zhang, Yue Wang, Zhihan Lu, Vivek Boominathan, Ashok Veeraraghavan, Yingyan Lin.
ICML 2020. AutoGAN-Distiller: Searching to Compress Generative Adversarial Networks. Yonggan Fu, Wuyang Chen, Haotao Wang, Haoran Li, Yingyan Lin, Zhangyang Wang.
ICLR 2020. Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks. Haoran You, Chaojian Li, Pengfei Xu, Yonggan Fu, Yue Wang, Xiaohan Chen, Richard G. Baraniuk, Zhangyang Wang, Yingyan Lin.
NeurIPS 2020. FracTrain: Fractionally Squeezing Bit Savings Both Temporally and Spatially for Efficient DNN Training. Yonggan Fu, Haoran You, Yang Zhao, Yue Wang, Chaojian Li, Kailash Gopalakrishnan, Zhangyang Wang, Yingyan Lin.
AAAI 2020. Fractional Skipping: Towards Finer-Grained Dynamic CNN Inference. Jianghao Shen, Yue Wang, Pengfei Xu, Yonggan Fu, Zhangyang Wang, Yingyan Lin.