Language Model Meets Prototypes: Towards Interpretable Text Classification Models Through Prototypical Networks

Abstract

Pretrained transformer-based Language Models (LMs) are well known for their ability to achieve significant improvements on NLP tasks, but their black-box nature, which leads to a lack of interpretability, has been a major concern. My dissertation focuses on developing intrinsically interpretable models that use LMs as encoders while maintaining their superior performance, via prototypical networks. I initiated my research by investigating performance enhancements for interpretable sarcasm detection models. My proposed approach captures sentiment incongruity to improve accuracy while offering instance-based explanations for classification decisions. I then developed a novel white-box, multi-head graph-attention-based prototypical framework designed to explain the decisions of text classification models without sacrificing the accuracy of the original black-box LMs. In addition, I am extending the attention-based prototypical framework with contrastive learning to redesign an interpretable graph neural network for document classification, aiming to enhance both interpretability and performance.
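The abstract describes a prototypical-network head on top of an LM encoder, where classification decisions are explained by similarity to learned prototypes. Below is a minimal sketch of that idea in PyTorch; the encoder name, prototype count, similarity function, and class count are illustrative assumptions, not the paper's released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel

class PrototypeClassifier(nn.Module):
    # Hypothetical sketch: an LM encoder followed by a prototype layer,
    # loosely following the abstract's description, not the author's code.
    def __init__(self, encoder_name="bert-base-uncased",
                 n_prototypes=10, n_classes=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Learnable prototype vectors in the encoder's representation space.
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, hidden))
        # Linear layer mapping prototype similarities to class logits.
        self.classify = nn.Linear(n_prototypes, n_classes)

    def forward(self, input_ids, attention_mask):
        # Use the [CLS] token embedding as the text representation.
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state[:, 0]
        # Cosine similarity of the input to each prototype -- this is the
        # interpretable intermediate used to explain the prediction.
        sims = F.cosine_similarity(h.unsqueeze(1),
                                   self.prototypes.unsqueeze(0), dim=-1)
        return self.classify(sims), sims

At inference time, the per-prototype similarity scores (sims) act as the instance-based explanation: an input is classified as it is because it resembles particular prototypes, each of which can in turn be mapped to its nearest training example.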

Cite

Text

Wen. "Language Model Meets Prototypes: Towards Interpretable Text Classification Models Through Prototypical Networks." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I28.35231

Markdown

[Wen. "Language Model Meets Prototypes: Towards Interpretable Text Classification Models Through Prototypical Networks." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/wen2025aaai-language/) doi:10.1609/AAAI.V39I28.35231

BibTeX

@inproceedings{wen2025aaai-language,
  title     = {{Language Model Meets Prototypes: Towards Interpretable Text Classification Models Through Prototypical Networks}},
  author    = {Wen, Ximing},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {29307--29308},
  doi       = {10.1609/aaai.v39i28.35231},
  url       = {https://mlanthology.org/aaai/2025/wen2025aaai-language/}
}