Edge Importance Scores for Editing Graph Topology to Preserve Fairness
Abstract
Graph neural networks have shown promising performance on graph analytical tasks such as node classification and link prediction, contributing to great advances in many graph-based applications. Despite this success, most graph neural networks lack fairness considerations; consequently, they can yield discriminatory results towards certain populations when deployed in high-stakes applications. In this work, we study the problem of predictive bias propagated by relational information, and propose an in-training edge editing approach to promote fairness. We introduce the notions of faithfulness and unfairness for an edge in a graph, and use them as prior knowledge to edit graph topology and improve fairness.
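The abstract does not specify how per-edge scores are computed or how the topology is edited. As a minimal illustration only, the sketch below scores each edge by a leave-one-edge-out change in a demographic-parity gap under a fixed one-layer linear propagation model, then prunes the highest-scoring edges. All function names, the scoring rule, and the toy data are hypothetical and are not taken from the paper.

```python
import numpy as np

def predict(A, X, w):
    # One-layer linear "GNN" stand-in (hypothetical): row-normalized
    # propagation of features, then a fixed linear scorer with sigmoid.
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0
    H = (A / deg) @ X
    return 1.0 / (1.0 + np.exp(-(H @ w)))

def dp_gap(scores, s):
    # Demographic-parity gap: |E[score | s=0] - E[score | s=1]|.
    return abs(scores[s == 0].mean() - scores[s == 1].mean())

def edge_unfairness_scores(A, X, w, s):
    # Leave-one-edge-out scoring: an edge's score is how much the
    # parity gap drops when that edge is removed (positive => the
    # edge contributes bias). O(|E|) forward passes; illustration only.
    base = dp_gap(predict(A, X, w), s)
    scores = {}
    for i, j in zip(*np.triu(A).nonzero()):
        A2 = A.copy()
        A2[i, j] = A2[j, i] = 0.0
        scores[(i, j)] = base - dp_gap(predict(A2, X, w), s)
    return scores

def prune_biased_edges(A, scores, k=1):
    # Edit the topology: drop the k edges with the largest scores.
    A2 = A.copy()
    for (i, j) in sorted(scores, key=scores.get, reverse=True)[:k]:
        A2[i, j] = A2[j, i] = 0.0
    return A2

# Toy example (all numbers hypothetical): a 4-node cycle graph.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = np.eye(4)                        # identity features
w = np.array([1.0, 2.0, -1.0, 0.5])  # fixed, arbitrary scorer weights
s = np.array([0, 0, 0, 1])           # binary sensitive attribute
scores = edge_unfairness_scores(A, X, w, s)
A_fair = prune_biased_edges(A, scores, k=1)
```

In practice an in-training method would score edges differentiably (e.g. via gradients of a fairness loss with respect to edge weights) rather than by exhaustive edge removal, but the leave-one-edge-out form keeps the idea of an edge-level unfairness score explicit.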
Cite
Tanneru. "Edge Importance Scores for Editing Graph Topology to Preserve Fairness." ICML 2023 Workshops: TAGML, 2023.
BibTeX
@inproceedings{tanneru2023icmlw-edge,
  title     = {{Edge Importance Scores for Editing Graph Topology to Preserve Fairness}},
  author    = {Tanneru, Sree Harsha},
  booktitle = {ICML 2023 Workshops: TAGML},
  year      = {2023},
  url       = {https://mlanthology.org/icmlw/2023/tanneru2023icmlw-edge/}
}