Model Agnostic Interpretability for Multiple Instance Learning

Abstract

In Multiple Instance Learning (MIL), models are trained using bags of instances, where only a single label is provided for each bag. A bag label is often determined by only a handful of key instances within a bag, making it difficult to interpret what information a classifier is using to make decisions. In this work, we establish the key requirements for interpreting MIL models. We then develop several model-agnostic approaches that meet these requirements. Our methods are compared against existing inherently interpretable MIL models on several datasets, and achieve an increase in interpretability accuracy of up to 30%. We also examine the ability of the methods to identify interactions between instances and scale to larger datasets, improving their applicability to real-world problems.
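To make the setup concrete, the sketch below illustrates the standard MIL assumption the abstract describes: each bag carries a single label, and that label is driven by a handful of key instances. The function and data here are hypothetical illustrations, not the paper's method.

```python
# Illustrative sketch of the MIL setup (hypothetical example, not the
# paper's approach). Under the standard MIL assumption, a bag is
# positive if at least one of its instances is positive; only the
# bag-level label is observed during training.

def bag_label(instance_labels):
    """Derive the (observed) bag label from the (hidden) instance labels."""
    return int(any(instance_labels))

# Each bag is a list of instances; instance labels are hidden from the learner.
bags = [
    [0, 0, 1, 0],  # one key instance makes the whole bag positive
    [0, 0, 0],     # no positive instances -> negative bag
]

labels = [bag_label(b) for b in bags]
print(labels)  # [1, 0]
```

Interpreting a trained MIL classifier then amounts to recovering which instances in a bag drove its prediction, which is what the model-agnostic approaches in the paper evaluate.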

Cite

Text

Early et al. "Model Agnostic Interpretability for Multiple Instance Learning." International Conference on Learning Representations, 2022.

Markdown

[Early et al. "Model Agnostic Interpretability for Multiple Instance Learning." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/early2022iclr-model/)

BibTeX

@inproceedings{early2022iclr-model,
  title     = {{Model Agnostic Interpretability for Multiple Instance Learning}},
  author    = {Early, Joseph and Evers, Christine and Ramchurn, Sarvapali},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/early2022iclr-model/}
}