Encryption-Friendly LLM Architecture
Abstract
Large language models (LLMs) offer personalized responses based on user interactions, but this use case raises serious privacy concerns. Homomorphic encryption (HE) is a cryptographic primitive that supports arithmetic computations directly on encrypted data and provides a potential solution for privacy-preserving machine learning (PPML). However, the computational intensity of transformers poses challenges for applying HE to LLMs. In this work, we propose a modified HE-friendly transformer architecture with an emphasis on inference following personalized (private) fine-tuning. Utilizing LoRA fine-tuning and Gaussian kernels, we achieve significant computational speedups---6.94$\times$ for fine-tuning and 2.3$\times$ for inference---while maintaining performance comparable to plaintext models. Our findings provide a viable proof of concept for offering privacy-preserving LLM services in areas where data protection is crucial. Our code is available on GitHub.
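The paper's exact construction is given in the source; as a rough plaintext sketch of the two ingredients named in the abstract, the snippet below shows (a) the standard LoRA low-rank weight update and (b) attention weights computed with an unnormalized Gaussian kernel instead of softmax. Function names, the `sigma` parameter, and the omission of polynomial approximation for `exp` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lora_update(W, A, B, alpha=1.0):
    # Standard LoRA: freeze the pretrained weight W and learn only a
    # low-rank correction B @ A, so private fine-tuning under HE touches
    # far fewer encrypted parameters than full fine-tuning.
    return W + alpha * (B @ A)

def gaussian_kernel_attention(Q, K, V, sigma=1.0):
    # Replace softmax(QK^T) with an unnormalized Gaussian kernel
    # exp(-||q - k||^2 / (2 sigma^2)). This needs only additions,
    # multiplications, and exp (approximable by a polynomial under HE),
    # avoiding the division-based normalization of softmax.
    sq_dists = (
        np.sum(Q**2, axis=-1, keepdims=True)        # ||q||^2, shape (n, 1)
        - 2.0 * Q @ K.T                             # cross terms, shape (n, m)
        + np.sum(K**2, axis=-1, keepdims=True).T    # ||k||^2, shape (1, m)
    )
    weights = np.exp(-sq_dists / (2.0 * sigma**2))  # kernel weights, (n, m)
    return weights @ V

# Example: project queries against keys/values with the kernel attention.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(5, 8)), rng.normal(size=(5, 3))
out = gaussian_kernel_attention(Q, K, V)            # shape (4, 3)
```

Both pieces are HE-friendly in the sense that every operation reduces to additions and multiplications once `exp` is replaced by a low-degree polynomial approximation, which is the standard approach in CKKS-style schemes.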
Cite
Text
Rho et al. "Encryption-Friendly LLM Architecture." International Conference on Learning Representations, 2025.

Markdown

[Rho et al. "Encryption-Friendly LLM Architecture." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/rho2025iclr-encryptionfriendly/)

BibTeX
@inproceedings{rho2025iclr-encryptionfriendly,
  title     = {{Encryption-Friendly LLM Architecture}},
  author    = {Rho, Donghwan and Kim, Taeseong and Park, Minje and Kim, Jung Woo and Chae, Hyunsik and Ryu, Ernest K. and Cheon, Jung Hee},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/rho2025iclr-encryptionfriendly/}
}