MPCFormer: Fast, Performant and Private Transformer Inference with MPC

Abstract

Enabling private inference is crucial for many cloud inference services that are based on Transformer models. However, existing private inference solutions can increase the inference latency by more than 60$\times$ or significantly compromise the inference quality. In this paper, we design the framework MPCFormer as a practical solution, using Secure Multi-Party Computation (MPC) and Knowledge Distillation (KD). Through extensive evaluations, we show that MPCFormer significantly speeds up Transformer inference in MPC settings while achieving similar ML performance to the input model. On the IMDb dataset, it achieves similar performance to $\text{BERT}_\text{BASE}$, while being 5.3$\times$ faster. On the GLUE benchmark, it achieves 97% performance of $\text{BERT}_\text{BASE}$ with a 2.2$\times$ speedup. MPCFormer remains effective with different trained Transformer weights such as $\text{RoBERTa}_\text{BASE}$ and larger models including $\text{BERT}_\text{LARGE}$. Code is available at https://github.com/MccRee177/MPCFormer.
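
The abstract names the two ingredients of the recipe, MPC-friendly model approximation and knowledge distillation, without spelling them out. The sketch below illustrates, under assumptions, how such an approximate-then-distill pipeline might look in PyTorch: the polynomial forms, the constants, and all names (`QuadGELU`, `TwoQuadSoftmax`, `swap_nonlinearities`, `distill_step`) are illustrative choices, not the authors' exact implementation; see the linked repository for the official code.

```python
# Illustrative sketch of an approximate-then-distill pipeline in the spirit of
# MPCFormer. All approximations, constants, and names below are assumptions
# for illustration only.
import copy

import torch
import torch.nn as nn
import torch.nn.functional as F


class QuadGELU(nn.Module):
    """Quadratic stand-in for GELU: only additions and multiplications,
    which are cheap under MPC. Coefficients here are assumed."""
    def forward(self, x):
        return 0.125 * x ** 2 + 0.25 * x + 0.5


class TwoQuadSoftmax(nn.Module):
    """Softmax replacement that avoids exp by normalizing (x + c)^2.
    The shift constant c is an assumed value."""
    def __init__(self, c: float = 5.0):
        super().__init__()
        self.c = c

    def forward(self, scores):
        num = (scores + self.c) ** 2
        return num / num.sum(dim=-1, keepdim=True)


def swap_nonlinearities(model: nn.Module) -> nn.Module:
    """Recursively replace every nn.GELU in a trained model with QuadGELU.
    (Attention softmaxes would be swapped for TwoQuadSoftmax the same way.)"""
    for name, child in model.named_children():
        if isinstance(child, nn.GELU):
            setattr(model, name, QuadGELU())
        else:
            swap_nonlinearities(child)
    return model


def distill_step(teacher, student, inputs, optimizer, T: float = 2.0):
    """One knowledge-distillation step: the approximated student is trained
    to match the original teacher's softened predictions."""
    teacher.eval()
    with torch.no_grad():
        t_logits = teacher(inputs)
    s_logits = student(inputs)
    loss = F.kl_div(
        F.log_softmax(s_logits / T, dim=-1),
        F.softmax(t_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Toy stand-in for a trained Transformer classifier; real usage would
    # start from pretrained BERT weights and downstream-task training data.
    teacher = nn.Sequential(nn.Linear(16, 32), nn.GELU(), nn.Linear(32, 4))
    student = swap_nonlinearities(copy.deepcopy(teacher))
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
    for step in range(100):
        batch = torch.randn(8, 16)
        loss = distill_step(teacher, student, batch, optimizer)
    print(f"final distillation loss: {loss:.4f}")
```

The rationale for this ordering is that polynomial replacements remove the exponential and erf evaluations that dominate MPC cost, while distillation from the original model recovers most of the accuracy lost to the approximation; the distilled student can then be served inside an MPC inference engine.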

Cite

Text

Li et al. "MPCFormer: Fast, Performant and Private Transformer Inference with MPC." International Conference on Learning Representations, 2023.

Markdown

[Li et al. "MPCFormer: Fast, Performant and Private Transformer Inference with MPC." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/li2023iclr-mpcformer/)

BibTeX

@inproceedings{li2023iclr-mpcformer,
  title     = {{MPCFormer: Fast, Performant and Private Transformer Inference with MPC}},
  author    = {Li, Dacheng and Wang, Hongyi and Shao, Rulin and Guo, Han and Xing, Eric and Zhang, Hao},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/li2023iclr-mpcformer/}
}