Learning Unbiased Representations via Mutual Information Backpropagation
Abstract
We are interested in learning data-driven representations that can generalize well, even when trained on inherently biased data. In particular, we face the case where some attributes (bias) of the data, if learned by the model, can severely compromise its generalization properties. We tackle this problem through the lens of information theory, leveraging recent findings for a differentiable estimation of mutual information. We propose a novel end-to-end optimization strategy, which simultaneously estimates and minimizes the mutual information between the learned representation and specific data attributes. When applied to standard benchmarks, our model shows comparable or superior classification performance with respect to state-of-the-art approaches. Moreover, our method is general enough to be applicable to the problem of "algorithmic fairness", with competitive results.
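The core idea of estimating and minimizing mutual information with a differentiable estimator can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: it shows a generic MINE-style Donsker-Varadhan lower bound on I(Z; C) between a representation Z and a bias attribute C; the network architecture, sizes, and names (`StatisticsNetwork`, `dv_mi_lower_bound`) are hypothetical.

```python
import math
import torch
import torch.nn as nn

class StatisticsNetwork(nn.Module):
    """Scores (z, c) pairs; trained to tighten the MI lower bound.
    Hypothetical architecture, chosen for illustration only."""
    def __init__(self, z_dim, c_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + c_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z, c):
        return self.net(torch.cat([z, c], dim=1))

def dv_mi_lower_bound(T, z, c):
    """Donsker-Varadhan bound: E_joint[T(z,c)] - log E_marg[exp T(z,c')]."""
    joint = T(z, c).mean()
    # Shuffling c across the batch approximates samples from the
    # product of marginals p(z)p(c).
    c_shuffled = c[torch.randperm(c.size(0))]
    marginal = T(z, c_shuffled).logsumexp(dim=0) - math.log(c.size(0))
    return (joint - marginal).squeeze()

# Toy usage: z is a batch of representations, c a one-hot bias attribute.
# In an adversarial scheme, T ascends this bound (to estimate the MI)
# while the encoder producing z descends it, pushing I(Z; C) toward zero.
torch.manual_seed(0)
z = torch.randn(32, 8)
c = torch.eye(4)[torch.randint(0, 4, (32,))]
T = StatisticsNetwork(8, 4)
bound = dv_mi_lower_bound(T, z, c)
```

In practice the statistics network and the encoder are updated in alternation (or jointly with opposite signs on this term), so the bound stays a usable estimate of the mutual information while the representation is trained to reduce it.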
Cite
Text
Ragonesi et al. "Learning Unbiased Representations via Mutual Information Backpropagation." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021. doi:10.1109/CVPRW53098.2021.00307
Markdown
[Ragonesi et al. "Learning Unbiased Representations via Mutual Information Backpropagation." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021.](https://mlanthology.org/cvprw/2021/ragonesi2021cvprw-learning/) doi:10.1109/CVPRW53098.2021.00307
BibTeX
@inproceedings{ragonesi2021cvprw-learning,
title = {{Learning Unbiased Representations via Mutual Information Backpropagation}},
author = {Ragonesi, Ruggero and Volpi, Riccardo and Cavazza, Jacopo and Murino, Vittorio},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2021},
pages = {2729--2738},
doi = {10.1109/CVPRW53098.2021.00307},
url = {https://mlanthology.org/cvprw/2021/ragonesi2021cvprw-learning/}
}