Unleashing Linear Optimizers for Group-Fair Learning and Optimization
Abstract
Most systems and learning algorithms optimize average performance or average loss, one reason being computational complexity. However, many objectives of practical interest are more complex than simply average loss. This arises, for example, when balancing performance or loss with fairness across people. We prove that, from a computational perspective, optimizing arbitrary objectives that take into account performance over a small number of groups is not significantly harder than optimizing average performance. Our main result is a polynomial-time reduction that uses a linear optimizer to optimize an arbitrary (Lipschitz continuous) function of performance over a (constant) number of possibly overlapping groups. This includes fairness objectives over small numbers of groups, and we further point out that other existing notions of fairness, such as individual fairness, can be cast as convex optimization, so more standard convex techniques can be used. Beyond learning, our approach applies, more generally, to multi-objective optimization.
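The high-level idea of the reduction can be illustrated with a toy sketch (this is an illustration in the spirit of the abstract, not the paper's actual algorithm): given a black-box linear optimizer that minimizes any weighted sum of per-group losses, enumerate weight vectors on a grid over the simplex, collect the optimizer's answers as candidates, and return the candidate minimizing the (possibly nonlinear) group-fair objective. All names and data below are hypothetical.

```python
def linear_optimizer(weights, hypotheses, group_losses):
    # Black-box oracle: return the hypothesis minimizing the
    # weighted sum of its per-group losses.
    return min(hypotheses,
               key=lambda h: sum(w * l for w, l in zip(weights, group_losses[h])))

def reduce_to_linear(objective, hypotheses, group_losses, grid=11):
    # Sweep weight vectors (a, 1-a) over a grid on the 2-group simplex,
    # query the linear oracle at each, then pick the candidate that
    # minimizes the nonlinear fairness objective.
    candidates = []
    for i in range(grid):
        a = i / (grid - 1)
        candidates.append(linear_optimizer((a, 1 - a), hypotheses, group_losses))
    return min(candidates, key=lambda h: objective(group_losses[h]))

# Toy data: three hypotheses with (group-1 loss, group-2 loss).
hypotheses = ["h1", "h2", "h3"]
group_losses = {"h1": (0.1, 0.9), "h2": (0.3, 0.4), "h3": (0.8, 0.2)}

# Minimax fairness: minimize the worst group's loss (a Lipschitz objective).
best = reduce_to_linear(max, hypotheses, group_losses)  # → "h2"
```

With only two groups a simple grid suffices for the sketch; the paper's contribution is making this kind of reduction polynomial-time for any constant number of groups and any Lipschitz objective.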
Cite
Text
Alabi et al. "Unleashing Linear Optimizers for Group-Fair Learning and Optimization." Annual Conference on Computational Learning Theory, 2018.

Markdown

[Alabi et al. "Unleashing Linear Optimizers for Group-Fair Learning and Optimization." Annual Conference on Computational Learning Theory, 2018.](https://mlanthology.org/colt/2018/alabi2018colt-unleashing/)

BibTeX
@inproceedings{alabi2018colt-unleashing,
title = {{Unleashing Linear Optimizers for Group-Fair Learning and Optimization}},
author = {Alabi, Daniel and Immorlica, Nicole and Kalai, Adam},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2018},
pages = {2043--2066},
url = {https://mlanthology.org/colt/2018/alabi2018colt-unleashing/}
}