Fair Text Classification via Transferable Representations
Abstract
Group fairness is a central research topic in text classification, where achieving fair treatment across sensitive groups (e.g., women and men) remains an open challenge. We propose an approach that extends the use of the Wasserstein Dependency Measure for learning unbiased neural text classifiers. Given the challenge of distinguishing fair from unfair information in a text encoder, we draw inspiration from adversarial training by inducing independence between the representations learned for the target label and those learned for a sensitive attribute. We further show that domain adaptation can be efficiently leveraged to remove the need for access to sensitive attributes in the dataset we aim to debias. We provide both theoretical and empirical evidence that our approach is well-founded.
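As a rough illustration of the kind of penalty the abstract alludes to, the sketch below computes an empirical 1-Wasserstein distance between the (scalar) representations of two sensitive groups, which can be added to a classification loss to push the encoder toward group-independent representations. This is a simplified stand-in, not the authors' method: the paper works with a Wasserstein Dependency Measure between label and sensitive-attribute representations, whereas this toy uses the closed-form 1D distance between equal-sized samples; all names here are illustrative.

```python
import numpy as np

def wasserstein_1d(a, b):
    """Empirical 1-Wasserstein distance between two equal-sized 1D samples.

    In one dimension, W1 between empirical distributions reduces to the mean
    absolute difference of the sorted samples (quantile coupling).
    """
    a_sorted = np.sort(np.asarray(a, dtype=float))
    b_sorted = np.sort(np.asarray(b, dtype=float))
    return float(np.mean(np.abs(a_sorted - b_sorted)))

# Toy encoder outputs (one scalar per example), split by a hypothetical
# binary sensitive attribute; in practice these would come from the text encoder.
rng = np.random.default_rng(0)
group_a = rng.normal(0.0, 1.0, size=256)
group_b = rng.normal(0.5, 1.0, size=256)

# A fairness penalty of this form would be weighted and added to the task loss.
penalty = wasserstein_1d(group_a, group_b)
```

Driving this penalty toward zero makes the two group-conditional representation distributions indistinguishable in the W1 sense; the paper's actual objective targets statistical independence between representation spaces rather than this marginal alignment.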
Cite
Text
Leteno et al. "Fair Text Classification via Transferable Representations." Journal of Machine Learning Research, 2025.

Markdown
[Leteno et al. "Fair Text Classification via Transferable Representations." Journal of Machine Learning Research, 2025.](https://mlanthology.org/jmlr/2025/leteno2025jmlr-fair/)

BibTeX
@article{leteno2025jmlr-fair,
title = {{Fair Text Classification via Transferable Representations}},
author = {Leteno, Thibaud and Perrot, Michael and Laclau, Charlotte and Gourru, Antoine and Gravier, Christophe},
journal = {Journal of Machine Learning Research},
year = {2025},
pages = {1--47},
volume = {26},
url = {https://mlanthology.org/jmlr/2025/leteno2025jmlr-fair/}
}