The Framework Tax: Disparities Between Inference Efficiency in Research and Deployment

Abstract

Increased focus on the efficiency of machine learning systems has led to rapid improvements in hardware accelerator performance and model efficiency. However, the resulting increases in computational throughput and reductions in floating point operations have not directly translated to improvements in wall-clock inference latency. We demonstrate that these discrepancies can be largely attributed to bottlenecks introduced by deep learning frameworks. We denote this phenomenon as the framework tax, and observe that the disparity is growing as hardware speed increases over time. In this work, we examine this phenomenon through a series of case studies analyzing the effects of model design decisions, framework paradigms, and hardware platforms on total model latency. Based on our findings, we provide actionable recommendations to researchers and practitioners aimed at narrowing the gap between efficient ML model research and practice.
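
The following is a minimal sketch (not taken from the paper) of the kind of measurement the abstract describes: timing small PyTorch models in eager mode and comparing wall-clock latency against their approximate FLOP counts. At small scales, fixed per-call framework overhead (operator dispatch, kernel launches) tends to dominate, so latency does not shrink in proportion to FLOPs. The model sizes, iteration counts, and FLOP estimate below are illustrative assumptions, not the paper's benchmark setup.

```python
# Sketch: wall-clock latency vs. approximate FLOPs for small MLPs in eager mode.
# Illustrative only; sizes and iteration counts are assumptions, not the paper's setup.
import time
import torch

def time_forward(model, x, iters=100, warmup=10):
    """Average wall-clock latency (seconds) of model(x) over `iters` runs."""
    with torch.no_grad():
        for _ in range(warmup):
            model(x)
        if x.is_cuda:
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        if x.is_cuda:
            torch.cuda.synchronize()
        return (time.perf_counter() - start) / iters

device = "cuda" if torch.cuda.is_available() else "cpu"
for hidden in (64, 256, 1024):
    model = torch.nn.Sequential(
        torch.nn.Linear(hidden, hidden),
        torch.nn.ReLU(),
        torch.nn.Linear(hidden, hidden),
    ).to(device).eval()
    x = torch.randn(1, hidden, device=device)
    # Rough FLOP estimate: 2 * (in * out) multiply-accumulates per linear layer.
    flops = 2 * 2 * hidden * hidden
    latency_us = time_forward(model, x) * 1e6
    print(f"hidden={hidden:5d}  ~{flops / 1e6:8.2f} MFLOPs  latency={latency_us:8.1f} us")
```

If the framework tax were negligible, the measured latencies would roughly track the FLOP counts; in practice, the smaller configurations typically run far slower than their FLOPs alone would predict.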

Cite

Text

Fernandez et al. "The Framework Tax: Disparities Between Inference Efficiency in Research and Deployment." ICML 2023 Workshops: ES-FoMO, 2023.

Markdown

[Fernandez et al. "The Framework Tax: Disparities Between Inference Efficiency in Research and Deployment." ICML 2023 Workshops: ES-FoMO, 2023.](https://mlanthology.org/icmlw/2023/fernandez2023icmlw-framework/)

BibTeX

@inproceedings{fernandez2023icmlw-framework,
  title     = {{The Framework Tax: Disparities Between Inference Efficiency in Research and Deployment}},
  author    = {Fernandez, Jared and Kahn, Jacob and Na, Clara and Bisk, Yonatan and Strubell, Emma},
  booktitle = {ICML 2023 Workshops: ES-FoMO},
  year      = {2023},
  url       = {https://mlanthology.org/icmlw/2023/fernandez2023icmlw-framework/}
}