Gradient-Based Explanations for Deep Learning Survival Models
Abstract
Deep learning survival models often outperform classical methods in time-to-event predictions, particularly in personalized medicine, but their "black box" nature hinders broader adoption. We propose a framework for gradient-based explanation methods tailored to survival neural networks, extending their use beyond regression and classification. We analyze the implications of their theoretical assumptions for time-dependent explanations in the survival setting and propose effective visualizations incorporating the temporal dimension. Experiments on synthetic data show that gradient-based methods capture the magnitude and direction of local and global feature effects, including time dependencies. We introduce GradSHAP(t), a gradient-based counterpart to SurvSHAP(t), which outperforms SurvSHAP(t) and SurvLIME in a computational speed vs. accuracy trade-off. Finally, we apply these methods to medical data with multi-modal inputs, revealing relevant tabular features and visual patterns, as well as their temporal dynamics.
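As an illustration of the kind of time-dependent, gradient-based attribution the abstract describes, the following is a minimal sketch. The toy network, its shapes, and the Gradient x Input attribution rule are assumptions chosen for illustration; this is not the paper's GradSHAP(t) implementation.

```python
import torch
import torch.nn as nn

# Toy survival network mapping p features to survival probabilities at
# T discrete time points (architecture and sizes are illustrative assumptions).
p, T = 5, 10
model = nn.Sequential(nn.Linear(p, 32), nn.ReLU(), nn.Linear(32, T), nn.Sigmoid())

x = torch.randn(1, p, requires_grad=True)  # one instance to explain
surv = model(x)                            # shape (1, T): predicted S(t | x) per time point

# Gradient x Input attribution per time point: backpropagate each predicted
# survival probability and scale the resulting gradient by the input.
attributions = torch.zeros(T, p)
for t in range(T):
    grad = torch.autograd.grad(surv[0, t], x, retain_graph=True)[0]
    attributions[t] = (grad * x).detach().squeeze(0)

print(attributions.shape)  # (T, p): time-dependent feature attributions
```

Plotting each feature's attribution as a curve over the T time points gives the kind of temporal visualization the abstract refers to.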
Cite
Text
Langbein et al. "Gradient-Based Explanations for Deep Learning Survival Models." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Langbein et al. "Gradient-Based Explanations for Deep Learning Survival Models." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/langbein2025icml-gradientbased/)
BibTeX
@inproceedings{langbein2025icml-gradientbased,
title = {{Gradient-Based Explanations for Deep Learning Survival Models}},
author = {Langbein, Sophie Hanna and Koenen, Niklas and Wright, Marvin N.},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {32492--32522},
volume = {267},
url = {https://mlanthology.org/icml/2025/langbein2025icml-gradientbased/}
}