Interpretability Analysis on a Pathology Foundation Model Reveals Biologically Relevant Embeddings Across Modalities
Abstract
Mechanistic interpretability has been explored in detail for large language models (LLMs). For the first time, we provide a preliminary investigation applying similar interpretability methods to medical imaging. Specifically, we analyze the features of a ViT-Small encoder from a pathology foundation model by applying it to two datasets: one of pathology images alone, and one of pathology images paired with spatial transcriptomics. We discover interpretable representations of cell and tissue morphology, as well as gene expression, within the model's embedding space. Our work paves the way for further exploration of interpretable feature dimensions and their utility for medical and clinical applications.
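The abstract describes the analysis only at a high level. As a rough illustration, the sketch below shows one way such a study could be set up: extracting per-tile embeddings from a generic ViT-Small backbone (loaded via timm) and correlating individual embedding dimensions with spot-level gene expression. The model name, tensor shapes, gene counts, and data in this sketch are placeholders and assumptions, not the authors' foundation model or pipeline.

```python
# Minimal sketch (not the authors' code): probe ViT-Small embedding dimensions
# against gene expression. Model choice, shapes, and data are hypothetical.
import numpy as np
import timm
import torch
from scipy.stats import spearmanr

# A generic ViT-Small backbone stands in for the pathology foundation model.
model = timm.create_model("vit_small_patch16_224", pretrained=True, num_classes=0)
model.eval()

# patches: (N, 3, 224, 224) pathology tiles; expression: (N, G) spot-level gene counts.
patches = torch.randn(8, 3, 224, 224)               # placeholder image tiles
expression = np.random.poisson(2.0, size=(8, 50))   # placeholder transcriptomics

with torch.no_grad():
    embeddings = model(patches).numpy()              # (N, 384) ViT-Small embedding per tile

# Correlate each embedding dimension with each gene to flag biologically suggestive axes.
rho = np.zeros((embeddings.shape[1], expression.shape[1]))
for d in range(embeddings.shape[1]):
    for g in range(expression.shape[1]):
        rho[d, g], _ = spearmanr(embeddings[:, d], expression[:, g])

best_dim, best_gene = np.unravel_index(np.abs(rho).argmax(), rho.shape)
print(f"strongest dimension-gene pair: dim {best_dim}, gene {best_gene}, rho={rho[best_dim, best_gene]:.2f}")
```

With real data, the tiles and expression matrix would come from co-registered histology and spatial transcriptomics spots; here random placeholders are used only to keep the sketch self-contained and runnable.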
Cite
Text
Le et al. "Interpretability Analysis on a Pathology Foundation Model Reveals Biologically Relevant Embeddings Across Modalities." ICML 2024 Workshops: MI, 2024.
Markdown
[Le et al. "Interpretability Analysis on a Pathology Foundation Model Reveals Biologically Relevant Embeddings Across Modalities." ICML 2024 Workshops: MI, 2024.](https://mlanthology.org/icmlw/2024/le2024icmlw-interpretability/)
BibTeX
@inproceedings{le2024icmlw-interpretability,
title = {{Interpretability Analysis on a Pathology Foundation Model Reveals Biologically Relevant Embeddings Across Modalities}},
author = {Le, Nhat and Shen, Ciyue and Shah, Chintan and Martin, Blake and Shenker, Daniel and Padigela, Harshith and Hipp, Jennifer A. and Grullon, Sean and Abel, John and Pokkalla, Harsha Vardhan and Juyal, Dinkar},
booktitle = {ICML 2024 Workshops: MI},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/le2024icmlw-interpretability/}
}