HEMET: A Homomorphic-Encryption-Friendly Privacy-Preserving Mobile Neural Network Architecture
Abstract
Recently, Homomorphic Encryption (HE) has been used to implement Privacy-Preserving Neural Networks (PPNNs) that perform inference directly on encrypted data without decryption. Prior PPNNs adopt mobile network architectures such as SqueezeNet to reduce computing overhead, but we find that naïvely using a mobile network architecture in a PPNN does not necessarily achieve shorter inference latency. Despite having fewer parameters, a mobile network architecture typically introduces more layers and increases the HE multiplicative depth of a PPNN, thereby prolonging its inference latency. In this paper, we propose a \textbf{HE}-friendly privacy-preserving \textbf{M}obile neural n\textbf{ET}work architecture, \textbf{HEMET}. Experimental results show that, compared to state-of-the-art (SOTA) PPNNs, HEMET reduces the inference latency by $59.3\%\sim 61.2\%$ and improves the inference accuracy by $0.4\%\sim 0.5\%$.
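The abstract's core observation is that parameter count and HE cost are decoupled: under leveled HE, latency is driven by multiplicative depth, which grows with the number of sequential layers. Below is a minimal illustrative sketch (not from the paper) of this back-of-the-envelope reasoning; the layer/parameter counts and the assumption that each layer consumes about two multiplicative levels (one for the plaintext-weight linear op, one for a low-degree polynomial activation, as in CKKS-style schemes) are hypothetical.

```python
# Illustrative sketch: why a deeper "mobile" network can be slower under
# leveled HE despite having fewer parameters.
# Assumption (hypothetical): each layer costs ~2 multiplicative levels.

def multiplicative_depth(num_layers: int, levels_per_layer: int = 2) -> int:
    """Rough multiplicative depth of a sequential network under leveled HE."""
    return num_layers * levels_per_layer

# Hypothetical layer/parameter counts for illustration only.
networks = {
    "shallow_net (AlexNet-like)":   {"layers": 8,  "params": 60e6},
    "mobile_net (SqueezeNet-like)": {"layers": 26, "params": 1.2e6},
}

for name, net in networks.items():
    depth = multiplicative_depth(net["layers"])
    print(f"{name}: {net['params'] / 1e6:.1f}M params, HE depth ~{depth} levels")

# Fewer parameters does not imply lower HE latency: the deeper network needs a
# larger multiplicative depth, hence larger encryption parameters and slower
# ciphertext operations -- the mismatch that HEMET is designed to address.
```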
Cite
Text
Lou and Jiang. "HEMET: A Homomorphic-Encryption-Friendly Privacy-Preserving Mobile Neural Network Architecture." International Conference on Machine Learning, 2021.
Markdown
[Lou and Jiang. "HEMET: A Homomorphic-Encryption-Friendly Privacy-Preserving Mobile Neural Network Architecture." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/lou2021icml-hemet/)
BibTeX
@inproceedings{lou2021icml-hemet,
title = {{HEMET: A Homomorphic-Encryption-Friendly Privacy-Preserving Mobile Neural Network Architecture}},
author = {Lou, Qian and Jiang, Lei},
booktitle = {International Conference on Machine Learning},
year = {2021},
pages = {7102--7110},
volume = {139},
url = {https://mlanthology.org/icml/2021/lou2021icml-hemet/}
}