High-Dimensional Private Empirical Risk Minimization by Greedy Coordinate Descent

Abstract

In this paper, we study differentially private empirical risk minimization (DP-ERM). It has been shown that the worst-case utility of DP-ERM degrades polynomially as the dimension increases. This is a major obstacle to privately learning large machine learning models. In high-dimensional problems, it is common for some of a model's parameters to carry more information than others. To exploit this, we propose a differentially private greedy coordinate descent (DP-GCD) algorithm. At each iteration, DP-GCD privately performs a coordinate-wise gradient step along the gradient's (approximately) largest entry. We show theoretically that DP-GCD can achieve a logarithmic dependence on the dimension for a wide range of problems by naturally exploiting their structural properties (such as quasi-sparse solutions). We illustrate this behavior numerically on both synthetic and real datasets.
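To make the greedy update concrete, below is a minimal Python sketch of the iteration described in the abstract: the coordinate with the (approximately) largest gradient entry is selected via report-noisy-max, then updated with a noisy coordinate-wise gradient step. The step size and Laplace noise scales here are illustrative placeholders, not the privacy-calibrated values derived in the paper.

import numpy as np

def dp_gcd(grad_fn, w0, n_iters, step_size, noise_scale, rng=None):
    """Sketch of differentially private greedy coordinate descent (DP-GCD).

    NOTE: noise_scale is a placeholder; a real implementation must
    calibrate the noise to the coordinate-wise gradient sensitivity
    and the target privacy budget.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = w0.copy()
    for _ in range(n_iters):
        g = grad_fn(w)
        # Report-noisy-max: privately pick the entry with largest magnitude.
        j = int(np.argmax(np.abs(g) + rng.laplace(scale=noise_scale, size=g.shape)))
        # Noisy gradient step on the selected coordinate only.
        w[j] -= step_size * (g[j] + rng.laplace(scale=noise_scale))
    return w

# Toy usage: greedy descent on a quadratic with a quasi-sparse minimizer.
if __name__ == "__main__":
    target = np.array([5.0, 0.01, -0.02, 3.0, 0.0])
    grad_fn = lambda w: w - target  # gradient of 0.5 * ||w - target||^2
    w = dp_gcd(grad_fn, np.zeros(5), n_iters=50, step_size=0.5, noise_scale=0.05)
    print(np.round(w, 2))

On such quasi-sparse problems, the greedy rule concentrates updates (and thus the privacy budget) on the few informative coordinates, which is the mechanism behind the logarithmic dependence on the dimension.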

Cite

Text

Mangold et al. "High-Dimensional Private Empirical Risk Minimization by Greedy Coordinate Descent." Artificial Intelligence and Statistics, 2023.

Markdown

[Mangold et al. "High-Dimensional Private Empirical Risk Minimization by Greedy Coordinate Descent." Artificial Intelligence and Statistics, 2023.](https://mlanthology.org/aistats/2023/mangold2023aistats-highdimensional/)

BibTeX

@inproceedings{mangold2023aistats-highdimensional,
  title     = {{High-Dimensional Private Empirical Risk Minimization by Greedy Coordinate Descent}},
  author    = {Mangold, Paul and Bellet, Aurélien and Salmon, Joseph and Tommasi, Marc},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2023},
  pages     = {4894--4916},
  volume    = {206},
  url       = {https://mlanthology.org/aistats/2023/mangold2023aistats-highdimensional/}
}