Fairness in Forecasting of Observations of Linear Dynamical Systems
Abstract
In machine learning, training data often capture the behaviour of multiple subgroups of some underlying human population. This behaviour can often be modelled as observations of an unknown dynamical system with an unobserved state. When the training data for the subgroups are not controlled carefully, however, under-representation bias arises. To counter under-representation bias, we introduce two natural notions of fairness in time-series forecasting problems: subgroup fairness and instantaneous fairness. These notions extend predictive parity to the learning of dynamical systems. We then present globally convergent methods for the fairness-constrained learning problems using hierarchies of convexifications of non-commutative polynomial optimisation problems. We further show that by exploiting sparsity in the convexifications, we can reduce the run time of our methods considerably. Our empirical results on a biased data set motivated by insurance applications and the well-known COMPAS data set demonstrate the efficacy of our methods.
Cite
Text
Zhou et al. "Fairness in Forecasting of Observations of Linear Dynamical Systems." Journal of Artificial Intelligence Research, 2023. doi:10.1613/JAIR.1.14050
Markdown
[Zhou et al. "Fairness in Forecasting of Observations of Linear Dynamical Systems." Journal of Artificial Intelligence Research, 2023.](https://mlanthology.org/jair/2023/zhou2023jair-fairness/) doi:10.1613/JAIR.1.14050
BibTeX
@article{zhou2023jair-fairness,
title = {{Fairness in Forecasting of Observations of Linear Dynamical Systems}},
author = {Zhou, Quan and Marecek, Jakub and Shorten, Robert},
journal = {Journal of Artificial Intelligence Research},
year = {2023},
pages = {1247--1280},
doi = {10.1613/JAIR.1.14050},
volume = {76},
url = {https://mlanthology.org/jair/2023/zhou2023jair-fairness/}
}