HarmoniCa: Harmonizing Training and Inference for Better Feature Caching in Diffusion Transformer Acceleration
Abstract
Diffusion Transformers (DiTs) excel in generative tasks but face practical deployment challenges due to high inference costs. Feature caching, which stores and retrieves redundant computations, offers the potential for acceleration. Existing learning-based caching, though adaptive, overlooks the impact of the prior timestep. It also suffers from misaligned objectives between training and inference: training aligns the predicted noise, whereas inference targets high-quality images. These two discrepancies compromise both performance and efficiency. To address this, we harmonize training and inference with a novel learning-based caching framework dubbed HarmoniCa. It first incorporates Step-Wise Denoising Training (SDT) to ensure the continuity of the denoising process, where prior steps can be leveraged. In addition, an Image Error Proxy-Guided Objective (IEPO) is applied to balance image quality against cache utilization through an efficient proxy that approximates the image error. Extensive experiments across $8$ models, $4$ samplers, and resolutions from $256\times256$ to $2K$ demonstrate the superior performance and speedup of our framework. For instance, it achieves over $40\%$ latency reduction (*i.e.*, $2.07\times$ theoretical speedup) and improved performance on PixArt-$\alpha$. Remarkably, our *image-free* approach reduces training time by $25\%$ compared with the previous method. Our code is available at https://github.com/ModelTC/HarmoniCa.
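To make the feature-caching idea concrete, below is a minimal, hypothetical PyTorch sketch of the general mechanism the abstract refers to: a DiT block stores its output at one denoising step and, when a caching schedule (learned in HarmoniCa's case) says so, reuses that stored feature at a later step instead of recomputing it. All names, shapes, and the reuse rule here are illustrative assumptions, not the paper's implementation of SDT or IEPO.

```python
import torch
import torch.nn as nn


class CachedDiTBlock(nn.Module):
    """Toy transformer block with optional feature caching across denoising steps.

    Hypothetical illustration only: when `use_cache` is True, the block returns
    the feature computed at a previous timestep instead of recomputing it.
    """

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.cache = None  # feature stored from a prior denoising step

    def forward(self, x: torch.Tensor, use_cache: bool) -> torch.Tensor:
        if use_cache and self.cache is not None:
            return self.cache  # skip computation and reuse the cached feature
        h = self.norm1(x)
        h = x + self.attn(h, h, h, need_weights=False)[0]
        out = h + self.mlp(self.norm2(h))
        self.cache = out.detach()  # store for possible reuse at a later step
        return out


if __name__ == "__main__":
    block = CachedDiTBlock()
    tokens = torch.randn(1, 16, 64)        # (batch, tokens, dim)
    full = block(tokens, use_cache=False)   # step t: compute and cache
    reused = block(tokens, use_cache=True)  # step t-1: reuse the cached feature
    print(torch.allclose(full, reused))     # True: identical output, no recompute
```

In this sketch the decision to reuse is passed in as a flag; the paper's contribution is in how that decision and its training objective are learned so that the cached inference trajectory still produces high-quality images.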
Cite
Text
Huang et al. "HarmoniCa: Harmonizing Training and Inference for Better Feature Caching in Diffusion Transformer Acceleration." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Huang et al. "HarmoniCa: Harmonizing Training and Inference for Better Feature Caching in Diffusion Transformer Acceleration." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/huang2025icml-harmonica/)
BibTeX
@inproceedings{huang2025icml-harmonica,
title = {{HarmoniCa: Harmonizing Training and Inference for Better Feature Caching in Diffusion Transformer Acceleration}},
author = {Huang, Yushi and Wang, Zining and Gong, Ruihao and Liu, Jing and Zhang, Xinjie and Guo, Jinyang and Liu, Xianglong and Zhang, Jun},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {25835--25858},
volume = {267},
url = {https://mlanthology.org/icml/2025/huang2025icml-harmonica/}
}