Certifiably Robust Model Evaluation in Federated Learning Under Meta-Distributional Shifts

Abstract

We address the challenge of certifying the performance of a federated learning model on an unseen target network using only measurements from the source network that trained the model. Specifically, consider a source network "A" with $K$ clients, each holding private, non-IID datasets drawn from heterogeneous distributions, modeled as samples from a broader meta-distribution $\mu$. Our goal is to provide certified guarantees for the model's performance on a different, unseen network "B", governed by an unknown meta-distribution $\mu'$, assuming the deviation between $\mu$ and $\mu'$ is bounded, either in Wasserstein distance or an $f$-divergence. We derive worst-case uniform guarantees for both the model's average loss and its risk CDF, the latter corresponding to a novel, adversarially robust version of the Dvoretzky–Kiefer–Wolfowitz (DKW) inequality. In addition, we show how the vanilla DKW bound enables principled certification of the model's true performance on unseen clients within the same (source) network. Our bounds are efficiently computable, asymptotically minimax optimal, and preserve clients' privacy. We also establish non-asymptotic generalization bounds that converge to zero as $K$ grows and the minimum per-client sample size exceeds $\mathcal{O}(\log K)$. Empirical evaluations confirm the practical utility of our bounds across real-world tasks. The project code is available at: github.com/samin-mehdizadeh/Robust-Evaluation-DKW
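The vanilla DKW inequality referenced above admits a short illustration. With probability at least $1-\delta$, the empirical CDF of $K$ i.i.d. per-client risks deviates from the true risk CDF by at most $\varepsilon = \sqrt{\ln(2/\delta)/(2K)}$ uniformly over all thresholds. The sketch below (function names are illustrative, not from the paper's codebase) computes this band from observed client losses:

```python
import math

def dkw_halfwidth(n, delta=0.05):
    """Two-sided DKW band half-width: with prob. >= 1 - delta,
    sup_t |F_n(t) - F(t)| <= sqrt(ln(2/delta) / (2n))."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

def empirical_cdf(losses):
    """Return F_n, the empirical CDF of the observed per-client losses."""
    xs = sorted(losses)
    n = len(xs)
    return lambda t: sum(x <= t for x in xs) / n

# Example: risks of K = 100 clients from the source network (toy values).
client_losses = [0.1 + 0.008 * i for i in range(100)]
F_n = empirical_cdf(client_losses)
eps = dkw_halfwidth(len(client_losses), delta=0.05)

# Certified lower bound on P(risk of an unseen client <= 0.5):
lower = max(0.0, F_n(0.5) - eps)
```

Here `F_n(0.5) - eps` lower-bounds, with 95% confidence, the fraction of unseen same-network clients whose risk stays below 0.5; the paper's robust variant additionally accounts for a bounded shift between the source and target meta-distributions.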

Cite

Text

Najafi et al. "Certifiably Robust Model Evaluation in Federated Learning Under Meta-Distributional Shifts." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Najafi et al. "Certifiably Robust Model Evaluation in Federated Learning Under Meta-Distributional Shifts." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/najafi2025icml-certifiably/)

BibTeX

@inproceedings{najafi2025icml-certifiably,
  title     = {{Certifiably Robust Model Evaluation in Federated Learning Under Meta-Distributional Shifts}},
  author    = {Najafi, Amir and Sani, Samin Mahdizadeh and Farnia, Farzan},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {45588--45623},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/najafi2025icml-certifiably/}
}