Tang, Zhenheng
18 publications
ICML 2025: Can Compressed LLMs Truly Act? An Empirical Evaluation of Agentic Capabilities in LLM Compression
NeurIPS 2025: ChunkKV: Semantic-Preserving KV Cache Compression for Efficient Long-Context LLM Inference
ICLR 2025: Hot-Pluggable Federated Learning: Bridging General and Personalized FL via Dynamic Selection
NeurIPS 2024: FuseFL: One-Shot Federated Learning Through the Lens of Causality with Progressive Model Fusion