Differentiable Approximations of Fair OWA Optimization
Abstract
Decision processes in AI and operations research often involve parametric optimization problems whose unknown parameters must be predicted from correlated data. In such settings, the Predict-Then-Optimize (PtO) paradigm trains prediction models end-to-end with the subsequent optimization model. This paper extends PtO to the optimization of nondifferentiable Ordered Weighted Averaging (OWA) objectives, which are known for ensuring fair and robust solutions with respect to multiple objectives. By proposing efficient differentiable approximations of OWA optimization, it provides a framework for integrating fair optimization concepts with parametric prediction under uncertainty.
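For intuition, the OWA aggregation the abstract refers to applies fixed weights to the sorted components of an outcome vector; with nonincreasing weights, the worst outcomes receive the largest weights, which is the fairness mechanism at play. A minimal sketch (the weights and values below are illustrative, not from the paper); the sort makes the objective piecewise linear and hence nondifferentiable, which is what motivates the smooth approximations:

```python
import numpy as np

def owa(y, w):
    """Ordered Weighted Average: weight the sorted components of y by w.

    With y sorted ascending and w nonincreasing, the worst (smallest)
    outcomes get the largest weights, favoring fair solutions.
    """
    return float(np.dot(np.sort(y), w))

y = np.array([3.0, 1.0, 2.0])   # outcomes across objectives (illustrative)
w = np.array([0.5, 0.3, 0.2])   # nonincreasing weights emphasize the worst case
print(owa(y, w))  # sorted y = [1, 2, 3] -> 0.5*1 + 0.3*2 + 0.2*3 = 1.7
```

The sorting step has a piecewise-constant permutation, so gradients through `np.sort` are undefined at ties and uninformative about the ordering itself; the paper's contribution is a differentiable surrogate for optimizing such objectives end-to-end.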
Cite

Dinh et al. "Differentiable Approximations of Fair OWA Optimization." ICML 2024 Workshops: Differentiable_Almost_Everything, 2024. https://mlanthology.org/icmlw/2024/dinh2024icmlw-differentiable/

BibTeX
@inproceedings{dinh2024icmlw-differentiable,
title = {{Differentiable Approximations of Fair OWA Optimization}},
author = {Dinh, My H and Kotary, James and Fioretto, Ferdinando},
booktitle = {ICML 2024 Workshops: Differentiable_Almost_Everything},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/dinh2024icmlw-differentiable/}
}