FedAvP: Augment Local Data via Shared Policy in Federated Learning

Abstract

Federated Learning (FL) allows multiple clients to collaboratively train models without directly sharing their private data. While various data augmentation techniques have been actively studied in the FL setting, most of these methods exchange input-level or feature-level data information during communication, posing a risk of privacy leakage. In response to this challenge, we introduce a federated data augmentation algorithm named FedAvP that shares only the augmentation policies, not any data-related information. For data security and efficient policy search, we interpret the policy loss as a meta update loss in standard FL algorithms and utilize first-order gradient information to further enhance privacy and reduce communication costs. Moreover, we propose a meta-learning method to search for adaptive personalized policies tailored to heterogeneous clients. Our approach outperforms the best existing augmentation policy search methods and federated data augmentation methods on benchmarks for heterogeneous FL.
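The core idea of the abstract — clients communicate only model weights and augmentation-policy parameters, never raw inputs or features — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the toy "augmentation," and the stand-in gradients are all hypothetical.

```python
import numpy as np

def local_update(weights, policy, data, lr=0.1):
    """One client's local step (hypothetical stand-in).

    A policy-driven augmentation perturbs the local data, then a toy
    gradient step updates the model weights. Only `weights` and `policy`
    ever leave the client; `data` stays local.
    """
    # Toy augmentation: policy[0] controls noise magnitude (illustrative only).
    augmented = data + policy[0] * np.random.randn(*data.shape)
    grad = augmented.mean(axis=0)  # stand-in for a real loss gradient
    return weights - lr * grad, policy

def federated_round(server_weights, server_policy, client_datasets):
    """FedAvg-style aggregation over both weights and policy parameters.

    The server averages the returned weights and policies; note that no
    data-derived tensors beyond these parameters cross the network.
    """
    ws, ps = [], []
    for data in client_datasets:
        w, p = local_update(server_weights.copy(), server_policy.copy(), data)
        ws.append(w)
        ps.append(p)
    return np.mean(ws, axis=0), np.mean(ps, axis=0)
```

In the actual method, the policy would also be updated via a meta loss computed from first-order gradient information during local training; here it is simply passed through and averaged, to keep the communication pattern (policies only, no data) visible.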

Cite

Text

Hong et al. "FedAvP: Augment Local Data via Shared Policy in Federated Learning." Neural Information Processing Systems, 2024. doi:10.52202/079017-0575

Markdown

[Hong et al. "FedAvP: Augment Local Data via Shared Policy in Federated Learning." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/hong2024neurips-fedavp/) doi:10.52202/079017-0575

BibTeX

@inproceedings{hong2024neurips-fedavp,
  title     = {{FedAvP: Augment Local Data via Shared Policy in Federated Learning}},
  author    = {Hong, Minui and Yun, Junhyeog and Jeon, Insu and Kim, Gunhee},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-0575},
  url       = {https://mlanthology.org/neurips/2024/hong2024neurips-fedavp/}
}