Multi-Output Distributional Fairness via Post-Processing
Abstract
Post-processing approaches are becoming prominent techniques for enhancing the fairness of machine learning models because of their intuitiveness, low computational cost, and excellent scalability. However, most existing post-processing methods are designed for task-specific fairness measures and are limited to single-output models. In this paper, we introduce a post-processing method for multi-output models, such as those used for multi-task/multi-class classification and representation learning, to enhance a model's distributional parity, a task-agnostic fairness measure. Existing methods for achieving distributional parity rely on the (inverse) cumulative distribution function of a model's output, which restricts their applicability to single-output models. Extending previous works, we propose employing optimal transport mappings to move a model's outputs across different groups towards their empirical Wasserstein barycenter. An approximation technique is applied to reduce the complexity of computing the exact barycenter, and a kernel regression method is proposed to extend this process to out-of-sample data. Our empirical studies evaluate the proposed approach against various baselines on multi-task/multi-class classification and representation learning tasks, demonstrating its effectiveness.
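The sketch below is a minimal illustration of the idea summarized in the abstract, not the authors' implementation: each group's multi-dimensional model outputs are matched to an approximate empirical Wasserstein barycenter via discrete optimal transport, and the resulting group-wise maps are extended to out-of-sample outputs with Nadaraya-Watson kernel regression. All function names, the fixed-point barycenter approximation, the bandwidth, and the toy data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def ot_match(source, target):
    """Discrete OT with uniform weights and squared Euclidean cost:
    returns, for each source point, its matched target point.
    Assumes source and target have the same number of points."""
    cost = ((source[:, None, :] - target[None, :, :]) ** 2).sum(-1)
    row, col = linear_sum_assignment(cost)
    matched = np.empty_like(source)
    matched[row] = target[col]
    return matched


def approx_barycenter(groups, n_iter=10):
    """Fixed-point approximation of the empirical Wasserstein-2 barycenter of
    equally weighted groups (subsample groups to a common size beforehand)."""
    bary = groups[0].copy()
    for _ in range(n_iter):
        bary = np.mean([ot_match(bary, g) for g in groups], axis=0)
    return bary


def fit_alignment_map(group, bary, bandwidth=1.0):
    """Nadaraya-Watson kernel regression extending the discrete OT matching
    group -> barycenter to arbitrary (out-of-sample) model outputs."""
    targets = ot_match(group, bary)  # OT image of each in-sample output

    def transport(outputs):
        d2 = ((outputs[:, None, :] - group[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))
        w /= w.sum(axis=1, keepdims=True) + 1e-12
        return w @ targets

    return transport


# Toy usage: two groups of 3-dimensional model outputs with shifted means.
rng = np.random.default_rng(0)
g0 = rng.normal(0.0, 1.0, size=(300, 3))
g1 = rng.normal(0.7, 1.2, size=(300, 3))

bary = approx_barycenter([g0, g1])
T0, T1 = fit_alignment_map(g0, bary), fit_alignment_map(g1, bary)

# Post-process held-out outputs from each group toward the shared barycenter.
new0 = rng.normal(0.0, 1.0, size=(50, 3))
new1 = rng.normal(0.7, 1.2, size=(50, 3))
fair0, fair1 = T0(new0), T1(new1)
print(fair0.mean(0), fair1.mean(0))  # group means should now be close
```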
Cite
Li et al. "Multi-Output Distributional Fairness via Post-Processing." Transactions on Machine Learning Research, 2025.
BibTeX
@article{li2025tmlr-multioutput,
title = {{Multi-Output Distributional Fairness via Post-Processing}},
author = {Li, Gang and Lin, Qihang and Ghosh, Ayush and Yang, Tianbao},
journal = {Transactions on Machine Learning Research},
year = {2025},
url = {https://mlanthology.org/tmlr/2025/li2025tmlr-multioutput/}
}