Extended LSTMs for Knowledge Tracing: Peeking Inside the Black Box (Student Abstract)
Abstract
This paper proposes extended Long Short-Term Memory (LSTM) networks for the knowledge tracing task and employs explainable AI methods to address their interpretability issues. Specifically, we developed an extended LSTM-based model to automatically diagnose students' knowledge states. We then leveraged three interpretability methods (gradient sensitivity, gradient*input, and Deep SHAP) to explain the model's predictions by computing the contribution of each input. The results demonstrate that the proposed model outperforms Deep Knowledge Tracing (DKT) and that all three methods effectively explain its predictions. Additionally, we identified three key insights into the model's working mechanisms.
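To make the attribution idea concrete, below is a minimal sketch (not the authors' implementation) of gradient*input applied to a DKT-style LSTM in PyTorch. The model architecture, layer sizes, and random inputs are illustrative assumptions; gradient sensitivity is the same computation without the elementwise product, and Deep SHAP would instead use the shap library.

```python
import torch
import torch.nn as nn

class LSTMKnowledgeTracer(nn.Module):
    """DKT-style model: encoded (skill, correctness) interactions -> per-skill mastery probabilities."""
    def __init__(self, num_skills: int, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2 * num_skills, hidden_size=hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, num_skills)

    def forward(self, x):
        h, _ = self.lstm(x)                # (batch, time, hidden)
        return torch.sigmoid(self.out(h))  # predicted mastery per skill at each step

num_skills, seq_len = 10, 5
model = LSTMKnowledgeTracer(num_skills)

# Fake interaction sequence; in practice each step one-hot encodes (skill, correct/incorrect).
x = torch.randn(1, seq_len, 2 * num_skills, requires_grad=True)

# Explain the final prediction for one target skill (skill index 3 here, chosen arbitrarily).
pred = model(x)[0, -1, 3]
pred.backward()

# gradient*input: elementwise product of each input with the gradient of the
# prediction w.r.t. that input; large magnitudes mark influential interactions.
attribution = (x.grad * x).detach()[0]        # (time, 2 * num_skills)
per_step_contribution = attribution.sum(dim=1)
print(per_step_contribution)                  # contribution of each past interaction
```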
Cite
Text
Wang et al. "Extended LSTMs for Knowledge Tracing: Peeking Inside the Black Box (Student Abstract)." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I28.35312
Markdown
[Wang et al. "Extended LSTMs for Knowledge Tracing: Peeking Inside the Black Box (Student Abstract)." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/wang2025aaai-extended/) doi:10.1609/AAAI.V39I28.35312
BibTeX
@inproceedings{wang2025aaai-extended,
title = {{Extended LSTMs for Knowledge Tracing: Peeking Inside the Black Box (Student Abstract)}},
author = {Wang, Deliang and Lu, Yu and Chen, Gaowei},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {29524--29526},
doi = {10.1609/AAAI.V39I28.35312},
url = {https://mlanthology.org/aaai/2025/wang2025aaai-extended/}
}