Towards Fair and Selectively Privacy-Preserving Models Using Negative Multi-Task Learning (Student Abstract)
Abstract
Deep learning models have shown great performance in natural language processing tasks. While much attention has been paid to improving utility, privacy leakage and social bias are two major concerns in trained models. To tackle these problems, we protect individuals' sensitive information and mitigate gender bias simultaneously. First, we propose a selective privacy-preserving method that obscures only individuals' sensitive information. Then we propose a negative multi-task learning framework to mitigate gender bias, which contains a main task and a gender prediction task. We analyze two existing word embeddings and evaluate them on sentiment analysis and a medical text classification task. Our experimental results show that our negative multi-task learning framework can mitigate gender bias while preserving model utility.
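The abstract describes a framework that pairs a main task with a gender prediction task under a "negative" weighting. As a rough illustration only (the paper's exact formulation is not given here), a minimal NumPy sketch of combining a main-task loss with a negatively weighted gender-task loss over a shared representation might look as follows; all parameter names and the trade-off weight `lam` are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 8 samples with 4 features, a main-task label y,
# and a gender attribute g (all synthetic, for illustration only).
X = rng.normal(size=(8, 4))
y = rng.integers(0, 2, size=8)
g = rng.integers(0, 2, size=8)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, t):
    # Binary cross-entropy, clipped for numerical stability.
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return -np.mean(t * np.log(p) + (1 - t) * np.log(1 - p))

# A shared encoder with one head per task (hypothetical parameters).
W_shared = rng.normal(scale=0.1, size=(4, 4))
w_main = rng.normal(scale=0.1, size=4)
w_gender = rng.normal(scale=0.1, size=4)

h = np.tanh(X @ W_shared)  # shared representation used by both tasks
main_loss = bce(sigmoid(h @ w_main), y)
gender_loss = bce(sigmoid(h @ w_gender), g)

# Negative multi-task objective (assumed form): minimizing this
# rewards poor gender prediction, discouraging the shared
# representation from encoding gender information.
lam = 0.5
total_loss = main_loss - lam * gender_loss
```

Under this assumed objective, gradient descent on `total_loss` pushes the shared encoder toward features that serve the main task while being uninformative for gender prediction.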
Cite
Text
Gao et al. "Towards Fair and Selectively Privacy-Preserving Models Using Negative Multi-Task Learning (Student Abstract)." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I13.26967
Markdown
[Gao et al. "Towards Fair and Selectively Privacy-Preserving Models Using Negative Multi-Task Learning (Student Abstract)." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/gao2023aaai-fair/) doi:10.1609/AAAI.V37I13.26967
BibTeX
@inproceedings{gao2023aaai-fair,
title = {{Towards Fair and Selectively Privacy-Preserving Models Using Negative Multi-Task Learning (Student Abstract)}},
author = {Gao, Liyuan and Zhan, Huixin and Chen, Austin and Sheng, Victor S.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2023},
pages = {16214--16215},
doi = {10.1609/AAAI.V37I13.26967},
url = {https://mlanthology.org/aaai/2023/gao2023aaai-fair/}
}