Revisiting Fair-PAC Learning and the Axioms of Cardinal Welfare

Abstract

Cardinal objectives serve as intuitive targets in fair machine learning by summarizing utility (welfare) or disutility (malfare) $u$ over $g$ groups. Under standard axioms, all welfare and malfare functions are $w$-weighted $p$-power-means, i.e., $M_p(u;w) = \sqrt[p]{\sum_{i=1}^g w_i u_i^p}$, with $p \leq 1$ for welfare, or $p \geq 1$ for malfare. We show the same under weaker axioms, and also identify stronger axioms that naturally restrict $p$. It is known that power-mean malfare functions are Lipschitz continuous, and thus statistically easy to estimate or learn. We show that all power means are locally Hölder continuous, i.e., $|M_p(u; w) - M_p(u'; w)| \leq \lambda \|u - u'\|^\alpha$ for some $\lambda$, $\alpha$, and $\|\cdot\|$. In particular, $\lambda$ and $1/\alpha$ are bounded except as $p \rightarrow 0$ or $\min_i w_i \rightarrow 0$, and via this analysis we bound the sample complexity of optimizing welfare. This yields a novel concept of fair-PAC learning, wherein welfare functions are only polynomially harder to optimize than malfare functions, except when $p \approx 0$ or $\min_i w_i \approx 0$, in which cases they are exponentially harder.
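As a rough illustration of the power-mean family $M_p(u; w)$ from the abstract (a minimal NumPy sketch, not the paper's code), the following assumes the weights $w$ sum to 1 and handles the $p \to 0$ limit, which is the weighted geometric mean:

```python
import numpy as np

def power_mean(u, w, p):
    """Weighted p-power mean M_p(u; w) = (sum_i w_i * u_i**p)**(1/p).

    Assumes u_i > 0 and sum(w) == 1. At p == 0 we take the limit,
    the weighted geometric mean exp(sum_i w_i * log(u_i)).
    """
    u = np.asarray(u, dtype=float)
    w = np.asarray(w, dtype=float)
    if p == 0:
        return float(np.exp(np.sum(w * np.log(u))))
    return float(np.sum(w * u ** p) ** (1.0 / p))

# Two groups, equal weights: p = 1 is the utilitarian (arithmetic) mean,
# p -> -inf approaches min_i u_i (egalitarian welfare),
# p -> +inf approaches max_i u_i (worst-case malfare for disutilities).
u, w = [0.5, 2.0], [0.5, 0.5]
print(power_mean(u, w, 1))   # arithmetic mean: 1.25
print(power_mean(u, w, 0))   # geometric mean:  1.0
print(power_mean(u, w, -1))  # harmonic mean:   0.8
```

The power mean is nondecreasing in $p$, which is why $p \leq 1$ (emphasizing worse-off groups) suits welfare and $p \geq 1$ (emphasizing worse-off, i.e. higher-disutility, groups) suits malfare.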

Cite

Text

Cousins. "Revisiting Fair-PAC Learning and the Axioms of Cardinal Welfare." Artificial Intelligence and Statistics, 2023.

Markdown

[Cousins. "Revisiting Fair-PAC Learning and the Axioms of Cardinal Welfare." Artificial Intelligence and Statistics, 2023.](https://mlanthology.org/aistats/2023/cousins2023aistats-revisiting/)

BibTeX

@inproceedings{cousins2023aistats-revisiting,
  title     = {{Revisiting Fair-PAC Learning and the Axioms of Cardinal Welfare}},
  author    = {Cousins, Cyrus},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2023},
  pages     = {6422--6442},
  volume    = {206},
  url       = {https://mlanthology.org/aistats/2023/cousins2023aistats-revisiting/}
}