Differentially Private CutMix for Split Learning with Vision Transformer

Abstract

Recently, the vision transformer (ViT) has begun to outpace conventional CNNs in computer vision tasks. For privacy-preserving distributed learning with ViT, federated learning (FL) communicates entire models, which is ill-suited to ViT's large model size and computing costs. Split learning (SL) circumvents this by communicating smashed data at a cut layer, yet suffers from data privacy leakage and large communication costs caused by the high similarity between ViT's smashed data and its input data. Motivated by this problem, we propose *DP-CutMixSL*, a differentially private (DP) SL framework built on *DP patch-level randomized CutMix (DP-CutMix)*, a novel privacy-preserving inter-client interpolation scheme that replaces randomly selected patches in smashed data. Experiments show that DP-CutMixSL not only strengthens privacy guarantees and communication efficiency, but also achieves higher accuracy than its vanilla SL counterpart. Theoretically, we show that DP-CutMix amplifies Rényi DP (RDP), and that its RDP guarantee is upper-bounded by that of its vanilla Mixup counterpart.
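The abstract gives no pseudocode, so the following is a minimal, hypothetical sketch of the patch-level randomized CutMix idea on smashed data, assuming ViT-style cut-layer activations of shape `(batch, num_patches, dim)` and a Gaussian mechanism as the DP noise source. The function and parameter names (`dp_cutmix_smashed`, `replace_prob`, `noise_std`) are illustrative, not the authors' implementation, and label mixing is omitted.

```python
import torch

def dp_cutmix_smashed(smashed_a: torch.Tensor,
                      smashed_b: torch.Tensor,
                      replace_prob: float = 0.5,
                      noise_std: float = 0.1) -> torch.Tensor:
    """Hypothetical sketch of DP patch-level randomized CutMix.

    smashed_a, smashed_b: cut-layer activations from two clients,
    shaped (batch, num_patches, dim) as in a ViT. Each patch position
    is independently swapped between clients with probability
    `replace_prob`; Gaussian noise (the DP mechanism assumed here)
    is added before mixing.
    """
    # Perturb each client's smashed data before interpolation
    # (assumed Gaussian mechanism; not the paper's exact calibration).
    noisy_a = smashed_a + noise_std * torch.randn_like(smashed_a)
    noisy_b = smashed_b + noise_std * torch.randn_like(smashed_b)

    # Per patch position, randomly choose which client's patch to keep.
    batch, num_patches, _ = noisy_a.shape
    mask = (torch.rand(batch, num_patches, 1) < replace_prob).float()

    # Mixed smashed data: client B's patches where mask == 1, else client A's.
    return mask * noisy_b + (1.0 - mask) * noisy_a
```

In this sketch the per-patch Bernoulli mask is what makes the interpolation "randomized CutMix" rather than Mixup, which would instead blend every patch with a global mixing ratio.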

Cite

Text

Oh et al. "Differentially Private CutMix for Split Learning with Vision Transformer." NeurIPS 2022 Workshops: INTERPOLATE, 2022.

Markdown

[Oh et al. "Differentially Private CutMix for Split Learning with Vision Transformer." NeurIPS 2022 Workshops: INTERPOLATE, 2022.](https://mlanthology.org/neuripsw/2022/oh2022neuripsw-differentially/)

BibTeX

@inproceedings{oh2022neuripsw-differentially,
  title     = {{Differentially Private CutMix for Split Learning with Vision Transformer}},
  author    = {Oh, Seungeun and Park, Jihong and Baek, Sihun and Nam, Hyelin and Vepakomma, Praneeth and Raskar, Ramesh and Bennis, Mehdi and Kim, Seong-Lyun},
  booktitle = {NeurIPS 2022 Workshops: INTERPOLATE},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/oh2022neuripsw-differentially/}
}