LiD-FL: Towards List-Decodable Federated Learning
Abstract
Federated learning is often deployed in environments with many unverified participants, so its robustness under adversarial attacks has received significant attention. This paper proposes an algorithmic framework for list-decodable federated learning, in which a central server maintains a list of models, at least one of which is guaranteed to perform well. The framework places no strict restriction on the fraction of honest clients, extending the applicability of Byzantine federated learning to scenarios where more than half of the clients are adversarial. Assuming the variance of the gradient noise in stochastic gradient descent is bounded, we prove a convergence theorem for our method on strongly convex and smooth losses. Experimental results, including image classification tasks with both convex and non-convex losses, demonstrate that the proposed algorithm can withstand a malicious majority under various attacks.
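To make the list-decoding idea concrete, here is a minimal toy sketch (not the authors' actual algorithm): the server maintains several candidate models, each trained on gradients from a different client group. Even when adversarial clients are the majority, a candidate whose group happens to be all-honest converges, so at least one model in the list performs well. The quadratic loss, the poisoned-gradient rule, and the fixed groupings are illustrative assumptions.

```python
TARGET = 10.0  # optimum of the toy quadratic loss (model - TARGET)^2 / 2

def client_gradient(model, honest):
    g = model - TARGET  # true gradient of the quadratic loss
    # Adversarial clients report a poisoned gradient that pushes
    # the model away from the optimum (an assumed attack model).
    return g if honest else -g + 5.0

def train_list(client_groups, clients, rounds=100, lr=0.2):
    # One candidate model per client group; each candidate averages
    # gradients only from its own group. If any group is all-honest,
    # the corresponding candidate converges to TARGET.
    models = [0.0] * len(client_groups)
    for _ in range(rounds):
        for i, group in enumerate(client_groups):
            avg = sum(client_gradient(models[i], clients[j])
                      for j in group) / len(group)
            models[i] -= lr * avg
    return models

# 4 honest clients, 6 adversaries: a malicious majority.
clients = [True] * 4 + [False] * 6
groups = [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9], [0, 4, 8]]
models = train_list(groups, clients)
# At least one model in the list is near the optimum.
best = min(models, key=lambda m: abs(m - TARGET))
```

Candidates tied to adversarial or mixed groups diverge, but the candidate backed by the all-honest group recovers the optimum; the server returns the whole list rather than guessing which one is good.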
Cite
Text
Liu et al. "LiD-FL: Towards List-Decodable Federated Learning." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I18.34072
Markdown
[Liu et al. "LiD-FL: Towards List-Decodable Federated Learning." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/liu2025aaai-lid/) doi:10.1609/AAAI.V39I18.34072
BibTeX
@inproceedings{liu2025aaai-lid,
title = {{LiD-FL: Towards List-Decodable Federated Learning}},
author = {Liu, Hong and Shan, Liren and Bao, Han and You, Ronghui and Yi, Yuhao and Lv, Jiancheng},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {18825--18833},
doi = {10.1609/AAAI.V39I18.34072},
url = {https://mlanthology.org/aaai/2025/liu2025aaai-lid/}
}