On-Premises LLM Deployment Demands a Middle Path: Preserving Privacy Without Sacrificing Model Confidentiality

Abstract

Current LLM customization typically relies on two deployment strategies: closed-source APIs, which require users to upload private data to external servers, and open-weight models, which allow local fine-tuning but pose misuse risks. In this paper, we argue that (1) deploying closed-source LLMs within user-controlled infrastructure (*on-premises deployment*) enhances data privacy and mitigates misuse risks, and (2) a well-designed on-premises deployment must ensure model confidentiality (by preventing model theft) and offer privacy-preserving customization. Prior research on small models has explored securing only the output layer within hardware-secured devices to balance confidentiality and customization efficiency. However, we show that this approach is insufficient for defending large-scale LLMs against distillation attacks. We therefore introduce a semi-open deployment framework that secures only a few carefully chosen layers, achieving distillation resistance comparable to fully secured models while preserving fine-tuning flexibility. Through extensive experiments, we show that securing the bottom layers significantly reduces functional extraction risks. Our findings demonstrate that privacy and confidentiality can coexist, paving the way for secure on-premises AI deployment that balances usability and protection.
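To make the semi-open split concrete, below is a minimal PyTorch sketch, not the authors' implementation: the hardware-secured bottom portion is modeled simply as a frozen module whose weights are never exposed to fine-tuning, while the remaining blocks stay open for customization. All names here (`SemiOpenLM`, `SecuredBottom`, `k_secured`) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SecuredBottom(nn.Module):
    """Stand-in for the hardware-secured part of the model.

    In a real deployment these weights would live inside a trusted
    device and never leave it; here they are merely frozen so that
    fine-tuning can neither read nor update them via gradients.
    """
    def __init__(self, embed: nn.Module, layers: nn.ModuleList):
        super().__init__()
        self.embed = embed
        self.layers = layers
        for p in self.parameters():
            p.requires_grad = False  # secured weights are not tunable

    @torch.no_grad()
    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        h = self.embed(ids)
        for layer in self.layers:
            h = layer(h)
        return h  # only activations cross the secure boundary

class SemiOpenLM(nn.Module):
    """Semi-open split: bottom k blocks secured, the rest open."""
    def __init__(self, vocab=32000, d_model=256, n_layers=8, k_secured=2):
        super().__init__()
        blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            for _ in range(n_layers)
        )
        self.secured = SecuredBottom(nn.Embedding(vocab, d_model),
                                     blocks[:k_secured])
        self.open = blocks[k_secured:]          # customizable layers
        self.head = nn.Linear(d_model, vocab)   # open output head

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        h = self.secured(ids)
        for layer in self.open:
            h = layer(h)
        return self.head(h)

model = SemiOpenLM()
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)  # open layers only
ids = torch.randint(0, 32000, (1, 16))
loss = model(ids).sum()   # dummy objective for a smoke test
loss.backward()           # gradients reach open layers; secured ones stay fixed
optimizer.step()
```

Placing the cut at the bottom of the stack, rather than at the output layer as in prior small-model work, means an attacker holding only the open layers lacks the early representations those layers were trained on, which is the intuition behind the distillation resistance reported in the paper.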

Cite

Text

Huang et al. "On-Premises LLM Deployment Demands a Middle Path: Preserving Privacy Without Sacrificing Model Confidentiality." ICLR 2025 Workshops: BuildingTrust, 2025.

Markdown

[Huang et al. "On-Premises LLM Deployment Demands a Middle Path: Preserving Privacy Without Sacrificing Model Confidentiality." ICLR 2025 Workshops: BuildingTrust, 2025.](https://mlanthology.org/iclrw/2025/huang2025iclrw-onpremises/)

BibTeX

@inproceedings{huang2025iclrw-onpremises,
  title     = {{On-Premises LLM Deployment Demands a Middle Path: Preserving Privacy Without Sacrificing Model Confidentiality}},
  author    = {Huang, Hanbo and Li, Yihan and Jiang, Bowen and Liu, Lin and Jiang, Bo and Sun, Ruoyu and Liu, Zhuotao and Liang, Shiyu},
  booktitle = {ICLR 2025 Workshops: BuildingTrust},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/huang2025iclrw-onpremises/}
}