Auto-Reconfiguration for Latency Minimization in CPU-Based DNN Serving
Abstract
In this paper, we investigate how to push the performance limits of serving Deep Neural Network (DNN) models on CPU-based servers. Specifically, we observe that while intra-operator parallelism across multiple threads is an effective way to reduce inference latency, it provides diminishing returns. Our primary insight is that instead of running a single instance of a model with all available threads on a server, running multiple instances, each with smaller batch sizes and fewer threads for intra-op parallelism, can provide lower inference latency. However, the right configuration is hard to determine manually since it is workload-dependent (the DNN model and batch size used by the serving system) and deployment-dependent (the number of CPU cores on the server). We present Packrat, a new serving system for online inference that, given a model and batch size ($B$), algorithmically picks the number of instances ($i$), the number of threads to allocate to each ($t$), and the per-instance batch sizes ($b$) that together minimize latency. Packrat is built as an extension to TorchServe and supports online reconfigurations to avoid serving downtime. Averaged across a range of batch sizes, Packrat improves inference latency by 1.43$\times$ to 1.83$\times$ on a range of commonly used DNNs.
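Packrat's actual algorithm relies on profiled latencies; as a minimal sketch of the search it performs, the following enumerates candidate $(i, t, b)$ configurations for a given core count and batch size and picks the one with the lowest predicted latency. The `latency_fn` argument, the restriction to even core splits, and the toy latency model in the example are illustrative assumptions, not Packrat's implementation:

```python
def partition_configs(total_cores, batch_size):
    """Enumerate (instances, threads-per-instance, per-instance batch sizes)
    configurations that use all cores and cover the full batch.
    For simplicity, this sketch only considers even core splits."""
    configs = []
    for instances in range(1, min(total_cores, batch_size) + 1):
        if total_cores % instances:
            continue
        threads = total_cores // instances
        # Split the batch as evenly as possible across instances.
        base, rem = divmod(batch_size, instances)
        batches = [base + 1] * rem + [base] * (instances - rem)
        configs.append((instances, threads, batches))
    return configs


def pick_config(total_cores, batch_size, latency_fn):
    """Pick the (i, t, b) configuration minimizing predicted latency.
    Instances run concurrently, so a configuration's latency is the
    maximum predicted per-instance latency."""
    return min(
        partition_configs(total_cores, batch_size),
        key=lambda cfg: max(latency_fn(b, cfg[1]) for b in cfg[2]),
    )


# Example: 16 cores, batch of 32, with a toy latency model whose per-thread
# overhead term mimics the diminishing returns of intra-op parallelism.
best = pick_config(16, 32, lambda b, t: b / t + 0.5 * t)
```

Under this toy model, splitting into many small instances wins, which mirrors the paper's insight that a single instance using all threads is rarely optimal.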
Cite
Text
Bhardwaj et al. "Auto-Reconfiguration for Latency Minimization in CPU-Based DNN Serving." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Bhardwaj et al. "Auto-Reconfiguration for Latency Minimization in CPU-Based DNN Serving." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/bhardwaj2025icml-autoreconfiguration/)
BibTeX
@inproceedings{bhardwaj2025icml-autoreconfiguration,
title = {{Auto-Reconfiguration for Latency Minimization in CPU-Based DNN Serving}},
author = {Bhardwaj, Ankit and Phanishayee, Amar and Narayanan, Deepak and Stutsman, Ryan},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {4115--4129},
volume = {267},
url = {https://mlanthology.org/icml/2025/bhardwaj2025icml-autoreconfiguration/}
}