SuperFedNAS: Cost-Efficient Federated Neural Architecture Search for On-Device Inference
Abstract
Neural Architecture Search (NAS) for Federated Learning (FL) is an emerging field. It automates the design and training of Deep Neural Networks (DNNs) when data cannot be centralized due to privacy, communication costs, or regulatory restrictions. Recent federated NAS methods not only reduce manual effort but also achieve higher accuracy than traditional FL methods such as FedAvg. Despite this success, existing federated NAS methods still fall short of satisfying the diverse deployment targets common in on-device inference, such as hardware capabilities, latency budgets, or variable battery levels. Most federated NAS methods search over only a limited range of neuro-architectural patterns and repeat them throughout a DNN, restricting achievable performance. Moreover, these methods incur prohibitive training costs to satisfy deployment targets: they repeat the training and search of DNN architectures for each case. SuperFedNAS addresses these challenges by decoupling the training and search in federated NAS. SuperFedNAS co-trains a large number of diverse DNN architectures contained inside one supernet in the FL setting. Post-training, clients perform NAS locally to find specialized DNNs by extracting different parts of the trained supernet, with no additional training. SuperFedNAS takes O(1) (instead of O(N)) cost to find specialized DNN architectures in FL for any N deployment targets. As part of SuperFedNAS, we introduce MaxNet, a novel FL training algorithm that performs multi-objective federated optimization of DNN architectures (≈ 5 × 10^8) under different client data distributions. SuperFedNAS achieves up to 37.7% higher accuracy or up to 8.13× reduction in MACs than existing federated NAS methods. Code is released at https://github.com/gatech-sysml/superfednas.
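To make the decoupled train-then-search flow concrete, below is a minimal Python sketch, not the authors' implementation: the configuration grid, the toy MACs model, and the helpers train_supernet_federated, extract_subnet, and local_nas are all hypothetical illustrations of paying the federated supernet-training cost once and then serving any number of deployment targets by training-free local search.

# Minimal sketch of decoupled federated train-then-search (hypothetical
# names; not the SuperFedNAS codebase). Subnets are identified by a
# (depth, width_multiplier) configuration drawn from one shared supernet.
import itertools
import random

DEPTHS = [2, 3, 4]
WIDTHS = [0.5, 0.75, 1.0]
CONFIGS = list(itertools.product(DEPTHS, WIDTHS))

def macs_of(config):
    """Toy cost model: MACs grow with depth and quadratically with width."""
    depth, width = config
    return int(1e6 * depth * width ** 2)

def train_supernet_federated(num_rounds=100, clients_per_round=8):
    """One-time federated co-training of weights shared by all subnets.

    Each round, sampled clients would train a few randomly drawn subnet
    configurations on local data, and the server would average the updates
    into the shared supernet weights (elided here).
    """
    supernet = {"shared_weights": [random.random() for _ in range(10)]}
    for _ in range(num_rounds):
        # ... sample clients, train sampled subnets locally, aggregate ...
        pass
    return supernet

def extract_subnet(supernet, config):
    """Slice one subnet's weights out of the supernet; no extra training."""
    depth, width = config
    return {"config": config, "weights": supernet["shared_weights"][:depth]}

def local_nas(supernet, macs_budget):
    """Training-free local search: only evaluates already-trained subnets."""
    feasible = [c for c in CONFIGS if macs_of(c) <= macs_budget]
    # In practice, rank feasible subnets by validation accuracy on local
    # data; this sketch just picks the largest one that fits the budget.
    best = max(feasible, key=macs_of)
    return extract_subnet(supernet, best)

supernet = train_supernet_federated()        # training cost paid once: O(1)
for budget in (1e6, 3e6, 9e6):               # N deployment targets
    subnet = local_nas(supernet, budget)     # no retraining per target
    print(budget, subnet["config"])

The key property the sketch illustrates is the O(1) versus O(N) cost: the loop over deployment budgets only extracts and evaluates subnets, so adding a new target never triggers another round of federated training.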
Cite
Text
Khare et al. "SuperFedNAS: Cost-Efficient Federated Neural Architecture Search for On-Device Inference." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72986-7_10
Markdown
[Khare et al. "SuperFedNAS: Cost-Efficient Federated Neural Architecture Search for On-Device Inference." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/khare2024eccv-superfednas/) doi:10.1007/978-3-031-72986-7_10
BibTeX
@inproceedings{khare2024eccv-superfednas,
title = {{SuperFedNAS: Cost-Efficient Federated Neural Architecture Search for On-Device Inference}},
author = {Khare, Alind and Agrawal, Animesh and Annavajjala, Aditya and Behnam, Payman and Lee, Myungjin and Latapie, Hugo M and Tumanov, Alexey},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2024},
doi = {10.1007/978-3-031-72986-7_10},
url = {https://mlanthology.org/eccv/2024/khare2024eccv-superfednas/}
}