Nimbus: Secure and Efficient Two-Party Inference for Transformers

Abstract

Transformer models have gained significant attention due to their power in machine learning tasks. Their extensive deployment has raised concerns about the potential leakage of sensitive information during inference. However, when applied to Transformers, existing approaches based on secure two-party computation (2PC) face two efficiency bottlenecks: (1) resource-intensive matrix multiplications in linear layers, and (2) complex non-linear activation functions such as $\mathsf{GELU}$ and $\mathsf{Softmax}$. This work presents $\mathsf{Nimbus}$, a new two-party inference framework for Transformer models. Specifically, we propose a new 2PC paradigm for securely computing matrix multiplications based on an outer-product insight, which achieves $2.9\times \sim 12.5\times$ performance improvements over the state-of-the-art (SOTA) protocol. Furthermore, by leveraging a new observation about the input distribution, we propose a low-degree polynomial approximation for $\mathsf{GELU}$ and $\mathsf{Softmax}$ that improves the performance of the SOTA polynomial approximation by $2.9\times \sim 4.0\times$, with an average accuracy loss of only 0.08\% relative to plaintext (non-2PC) inference. Compared with the SOTA two-party inference, $\mathsf{Nimbus}$ improves the end-to-end performance of $BERT_{base}$ inference by $2.7\times \sim 4.7\times$ across different network settings.
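To make the two ideas in the abstract concrete, the following sketch illustrates (a) the plain linear-algebra identity behind an "outer-product" view of matrix multiplication, and (b) a low-degree polynomial fit to $\mathsf{GELU}$ over a narrow input range. This is only an illustrative plaintext sketch, not the paper's 2PC protocol; the polynomial degree (6) and fitting range ($[-4, 4]$) are assumptions for demonstration.

```python
import numpy as np

def matmul_outer(A, B):
    """Compute A @ B as a sum of outer products of A's columns with B's rows.
    This is the standard identity underlying an outer-product view of matmul;
    it is NOT the secure protocol itself."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n))
    for i in range(k):
        C += np.outer(A[:, i], B[i, :])
    return C

def gelu(x):
    """tanh-based GELU approximation commonly used in Transformer models."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

# Fit a low-degree polynomial to GELU on a bounded range, mirroring the idea
# that activation inputs concentrate in a narrow interval. Degree and range
# here are illustrative choices, not the paper's.
xs = np.linspace(-4.0, 4.0, 2001)
coeffs = np.polyfit(xs, gelu(xs), deg=6)
max_err = np.max(np.abs(np.polyval(coeffs, xs) - gelu(xs)))
```

In a 2PC setting, decomposing the product into column-row outer products changes which operand must be communicated per term, which is what enables a cheaper protocol; the polynomial replaces the non-linear $\mathsf{GELU}$ with additions and multiplications that 2PC evaluates efficiently.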

Cite

Text

Li et al. "Nimbus: Secure and Efficient Two-Party Inference for Transformers." Neural Information Processing Systems, 2024. doi:10.52202/079017-0680

Markdown

[Li et al. "Nimbus: Secure and Efficient Two-Party Inference for Transformers." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/li2024neurips-nimbus/) doi:10.52202/079017-0680

BibTeX

@inproceedings{li2024neurips-nimbus,
  title     = {{Nimbus: Secure and Efficient Two-Party Inference for Transformers}},
  author    = {Li, Zhengyi and Yang, Kang and Tan, Jin and Lu, Wen-jie and Wu, Haoqi and Wang, Xiao and Yu, Yu and Zhao, Derun and Zheng, Yancheng and Guo, Minyi and Leng, Jingwen},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-0680},
  url       = {https://mlanthology.org/neurips/2024/li2024neurips-nimbus/}
}