Decoupling Vertical Federated Learning Using Local Self-Supervision

Abstract

Vertical Federated Learning (VFL) enables collaborative learning between clients that hold disjoint features of common entities. However, standard VFL lacks fault tolerance: each participant and each connection is a single point of failure. Prior attempts to make VFL fault tolerant focus on the scenario of "straggling clients", usually assuming that all messages eventually arrive or that the number of late messages is bounded. To handle the more general problem of arbitrary crashes, we propose Decoupled VFL (DVFL). DVFL tolerates faults during training by decoupling training between communication rounds with local unsupervised objectives. By further decoupling label supervision from aggregation, DVFL also enables redundant aggregators. As secondary benefits, DVFL can improve data efficiency and strengthen security against gradient-based attacks. In this work, we implement DVFL for split neural networks with a self-supervised autoencoder loss. This implementation performs comparably to VFL on a split-MNIST task and degrades more gracefully under faults than our best VFL-based method. We also discuss its gradient privacy and demonstrate its data efficiency.
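
The abstract describes clients that keep training between communication rounds using a local self-supervised (autoencoder) objective on their own feature partition. As a rough illustration of that idea only, and not the authors' released implementation, the PyTorch sketch below shows a client whose encoder is updated with a reconstruction loss independently of the aggregator; the class name, layer sizes, and usage loop are hypothetical choices.

```python
# Illustrative sketch (not the authors' code): a DVFL-style client that trains
# its encoder with a local autoencoder (reconstruction) loss, so it can keep
# learning even when the aggregator or other clients are unreachable.
import torch
import torch.nn as nn

class LocalAutoencoderClient(nn.Module):
    def __init__(self, in_dim: int, embed_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                     nn.Linear(64, embed_dim))
        self.decoder = nn.Sequential(nn.Linear(embed_dim, 64), nn.ReLU(),
                                     nn.Linear(64, in_dim))

    def local_step(self, x: torch.Tensor, opt: torch.optim.Optimizer) -> float:
        """One self-supervised update using only this client's feature partition."""
        opt.zero_grad()
        z = self.encoder(x)
        loss = nn.functional.mse_loss(self.decoder(z), x)  # reconstruction loss
        loss.backward()
        opt.step()
        return loss.item()

    @torch.no_grad()
    def embed(self, x: torch.Tensor) -> torch.Tensor:
        """Embedding forwarded to an aggregator when communication succeeds."""
        return self.encoder(x)

# Hypothetical usage: each client optimizes locally between communication rounds.
client = LocalAutoencoderClient(in_dim=392)   # e.g. one half of a flattened MNIST image
opt = torch.optim.Adam(client.parameters(), lr=1e-3)
x = torch.randn(16, 392)                      # placeholder batch of this client's features
print(client.local_step(x, opt))
```

Because the loss above uses no labels and no messages from other parties, a crash elsewhere in the federation does not block this client's updates; the paper's point is that supervision and aggregation can be layered on top of such locally trained representations.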

Cite

Text

Amalanshu et al. "Decoupling Vertical Federated Learning Using Local Self-Supervision." NeurIPS 2024 Workshops: SSL, 2024.

Markdown

[Amalanshu et al. "Decoupling Vertical Federated Learning Using Local Self-Supervision." NeurIPS 2024 Workshops: SSL, 2024.](https://mlanthology.org/neuripsw/2024/amalanshu2024neuripsw-decoupling/)

BibTeX

@inproceedings{amalanshu2024neuripsw-decoupling,
  title     = {{Decoupling Vertical Federated Learning Using Local Self-Supervision}},
  author    = {Amalanshu, Avi and Sirvi, Yash and Inouye, David I.},
  booktitle = {NeurIPS 2024 Workshops: SSL},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/amalanshu2024neuripsw-decoupling/}
}