De-Biased Attention Supervision for Text Classification with Causality
Abstract
In text classification models, the unsupervised attention mechanism can enhance performance, but it often produces attention distributions that are puzzling to humans, such as assigning high weight to seemingly insignificant conjunctions. Recently, numerous studies have explored Attention Supervision (AS) to guide the model toward more interpretable attention distributions. However, such AS can degrade classification performance, especially in specialized domains. In this paper, we address this issue from a causality perspective. First, we leverage a causal graph to reveal two biases in AS: 1) bias caused by the label distribution of the dataset, and 2) bias caused by words' different occurrence ranges, i.e., some words occur across labels while others occur only under a particular label. We then propose a novel De-biased Attention Supervision (DAS) method that eliminates these biases with causal techniques. Specifically, we apply backdoor adjustment to the label-caused bias and reduce the word-caused bias by subtracting the direct causal effect of the word. Through extensive experiments on two professional text classification datasets (medicine and law), we demonstrate that our method achieves improved classification accuracy along with more coherent attention distributions.
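For readers unfamiliar with backdoor adjustment, the standard formula is sketched below in generic notation (not necessarily the paper's exact formulation): the confounder Z here is a stand-in for the label-distribution bias the abstract describes, and X and Y denote the input word/attention target and the prediction, respectively.

$$P(Y \mid \mathrm{do}(X)) = \sum_{z} P(Y \mid X, Z = z)\, P(Z = z)$$

Intuitively, instead of conditioning on X alone (which lets the confounder Z leak spurious correlations into the attention supervision), the adjustment averages the conditional prediction over the confounder's marginal distribution.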
Cite
Text
Wu et al. "De-Biased Attention Supervision for Text Classification with Causality." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I17.29897
Markdown
[Wu et al. "De-Biased Attention Supervision for Text Classification with Causality." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/wu2024aaai-de/) doi:10.1609/AAAI.V38I17.29897
BibTeX
@inproceedings{wu2024aaai-de,
title = {{De-Biased Attention Supervision for Text Classification with Causality}},
author = {Wu, Yiquan and Liu, Yifei and Zhao, Ziyu and Lu, Weiming and Zhang, Yating and Sun, Changlong and Wu, Fei and Kuang, Kun},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2024},
pages = {19279-19287},
doi = {10.1609/AAAI.V38I17.29897},
url = {https://mlanthology.org/aaai/2024/wu2024aaai-de/}
}