Adaptive Clipping for Differential Private Federated Learning in Interpolation Regimes

Abstract

We investigate improving the utility of standard differentially private optimization algorithms by adaptively determining the clipping radius in federated learning. Our adaptive clipping radius is based on the root mean square of the gradient norms, motivated by the interpolation property and smoothness of the objectives. In addition to a Rényi Differential Privacy (RDP) analysis, we provide a theoretical utility analysis of the proposed algorithm, showing that it improves utility over DP-SGD for smooth, non-strongly convex objectives. Numerical experiments confirm the superiority of our adaptive clipping algorithm over standard DP optimization with a fixed clipping radius in federated learning settings.
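The sketch below illustrates the core idea described in the abstract: set the clipping radius from the root mean square of the per-client gradient norms, then apply clipping and Gaussian noise as in DP-SGD. This is a minimal, assumed implementation for intuition only; the function name, the noise calibration, and the way the radius is estimated are not taken from the paper, which in particular must determine the radius in a privacy-preserving manner.

```python
import numpy as np

def adaptive_clip_aggregate(client_grads, noise_multiplier, rng=None):
    """Illustrative sketch (not the paper's exact algorithm): clip client
    gradients to a radius given by the root mean square of their norms,
    then aggregate with Gaussian noise in the style of DP-SGD.

    Note: computing the radius directly from the raw norms, as done here,
    is itself not private; the actual method is assumed to estimate it
    under the stated RDP accounting.
    """
    rng = np.random.default_rng() if rng is None else rng
    norms = np.array([np.linalg.norm(g) for g in client_grads])

    # Adaptive clipping radius: root mean square of the gradient norms.
    clip_radius = float(np.sqrt(np.mean(norms ** 2)))

    # Clip each client gradient to the adaptive radius.
    clipped = [g * min(1.0, clip_radius / (n + 1e-12))
               for g, n in zip(client_grads, norms)]

    # Gaussian mechanism: noise scale proportional to the clipping radius.
    noisy_sum = sum(clipped) + rng.normal(
        scale=noise_multiplier * clip_radius, size=client_grads[0].shape)
    return noisy_sum / len(client_grads)
```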

Cite

Text

Fukami et al. "Adaptive Clipping for Differential Private Federated Learning in Interpolation Regimes." Transactions on Machine Learning Research, 2025.

Markdown

[Fukami et al. "Adaptive Clipping for Differential Private Federated Learning in Interpolation Regimes." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/fukami2025tmlr-adaptive/)

BibTeX

@article{fukami2025tmlr-adaptive,
  title     = {{Adaptive Clipping for Differential Private Federated Learning in Interpolation Regimes}},
  author    = {Fukami, Takumi and Murata, Tomoya and Niwa, Kenta},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/fukami2025tmlr-adaptive/}
}