Explaining Word Embeddings with Perfect Fidelity: A Case Study in Predicting Research Impact
Abstract
The best-performing approaches to scholarly document quality prediction are based on embedding models. Beyond their classification performance, embedding models can provide predictions even for words that were not contained in the labelled training data of the classification model, which is important given the ever-evolving research terminology. Although model-agnostic explanation methods, such as Local Interpretable Model-agnostic Explanations (LIME), can be applied to explain machine learning classifiers trained on embedding models, they produce results whose correspondence to the model is questionable. We introduce a new feature importance method, Self-Model Entities Rated (SMER), for logistic regression-based classification models trained on word embeddings. We show that SMER has theoretically perfect fidelity with the explained model: the average of the logits of the SMER scores for the individual words (the SMER explanation) exactly corresponds to the logit of the prediction of the explained model. Quantitative and qualitative evaluations are performed through five diverse experiments conducted on 50,000 research articles from the CORD-19 corpus. Through an AOPC curve analysis, we experimentally demonstrate that SMER produces better explanations than LIME, SHAP, and global tree surrogates.
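The perfect-fidelity claim follows from the linearity of the logit: if a document is represented as the average of its word embeddings, a logistic regression's logit for the document is an affine function of that average, and hence equals the average of the per-word logits. A minimal sketch of this property, with arbitrary illustrative weights and embeddings (not the authors' actual model or data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a document of 5 words with 8-dimensional embeddings,
# and a logistic regression with weight vector w and intercept b.
E = rng.normal(size=(5, 8))   # one embedding per word in the document
w = rng.normal(size=8)        # logistic regression weights
b = 0.3                       # intercept

# Document representation: the average of its word embeddings.
doc_logit = float(w @ E.mean(axis=0) + b)

# Per-word score in the spirit of SMER: the logit the model assigns
# to each word's embedding on its own.
word_logits = E @ w + b

# Perfect fidelity: averaging the per-word logits reproduces the
# document-level logit exactly (up to floating-point error), because
# the logit is affine in the averaged embedding.
assert np.isclose(word_logits.mean(), doc_logit)
```

This is why the explanation needs no surrogate model or sampling, unlike LIME: the per-word scores decompose the model's own prediction exactly.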
Cite
Text
Dvorackova et al. "Explaining Word Embeddings with Perfect Fidelity: A Case Study in Predicting Research Impact." Machine Learning, 2025. doi:10.1007/s10994-025-06870-6

Markdown
[Dvorackova et al. "Explaining Word Embeddings with Perfect Fidelity: A Case Study in Predicting Research Impact." Machine Learning, 2025.](https://mlanthology.org/mlj/2025/dvorackova2025mlj-explaining/) doi:10.1007/s10994-025-06870-6

BibTeX
@article{dvorackova2025mlj-explaining,
title = {{Explaining Word Embeddings with Perfect Fidelity: A Case Study in Predicting Research Impact}},
author = {Dvorackova, Lucie and Joachimiak, Marcin P. and Cerny, Michal and Kubecova, Adriana and Sklenák, Vilém and Kliegr, Tomáš},
journal = {Machine Learning},
year = {2025},
pages = {265},
doi = {10.1007/s10994-025-06870-6},
volume = {114},
url = {https://mlanthology.org/mlj/2025/dvorackova2025mlj-explaining/}
}