Listenable Maps for Zero-Shot Audio Classifiers

Abstract

Interpreting the decisions of deep learning models, including audio classifiers, is crucial for ensuring the transparency and trustworthiness of this technology. In this paper, we introduce LMAC-ZS (Listenable Maps for Zero-Shot Audio Classifiers), which, to the best of our knowledge, is the first decoder-based post-hoc method for explaining the decisions of zero-shot audio classifiers. The proposed method utilizes a novel loss function that aims to closely reproduce the original similarity patterns between text-and-audio pairs in the generated explanations. We provide an extensive evaluation using the Contrastive Language-Audio Pretraining (CLAP) model to showcase that our interpreter remains faithful to the decisions in a zero-shot classification context. Moreover, we qualitatively show that our method produces meaningful explanations that correlate well with different text prompts.
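
The abstract's key mechanism, a loss that rewards explanations which preserve the original text-audio similarity pattern, can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the toy encoder, the mean-squared-error objective over the similarity matrix, and the directly optimized mask standing in for the paper's decoder-generated saliency maps are all illustrative assumptions in place of CLAP's real encoders.

```python
# A minimal sketch of the similarity-preservation idea behind LMAC-ZS,
# not the paper's exact loss. All names below are hypothetical stand-ins.
import torch
import torch.nn.functional as F

def similarity_matrix(audio_emb, text_emb):
    # Cosine similarities between every (audio, text) pair, as in CLAP-style models.
    audio_emb = F.normalize(audio_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    return audio_emb @ text_emb.t()

def similarity_preservation_loss(audio_encoder, text_emb, spec, mask):
    # Illustrative objective: the masked audio should reproduce the
    # original audio-text similarity pattern (MSE is an assumption here;
    # the paper defines its own loss).
    sims_orig = similarity_matrix(audio_encoder(spec), text_emb).detach()
    sims_masked = similarity_matrix(audio_encoder(spec * mask), text_emb)
    return F.mse_loss(sims_masked, sims_orig)

# Toy usage with random tensors standing in for real CLAP embeddings.
torch.manual_seed(0)
audio_encoder = torch.nn.Sequential(  # stand-in for CLAP's audio encoder
    torch.nn.Flatten(), torch.nn.Linear(64 * 100, 512)
)
spec = torch.randn(4, 64, 100)                     # batch of log-mel spectrograms
text_emb = torch.randn(8, 512)                     # embeddings of 8 class prompts
mask = torch.rand(4, 64, 100, requires_grad=True)  # candidate saliency maps
loss = similarity_preservation_loss(audio_encoder, text_emb, spec, mask)
loss.backward()                                    # gradients flow into the masks
print(loss.item())
```

In the actual method the masks come from a trained decoder rather than direct optimization, but the sketch shows how gradients from a similarity-preservation objective can shape a listenable map.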

Cite

Text

Paissan et al. "Listenable Maps for Zero-Shot Audio Classifiers." Neural Information Processing Systems, 2024. doi:10.52202/079017-2087

Markdown

[Paissan et al. "Listenable Maps for Zero-Shot Audio Classifiers." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/paissan2024neurips-listenable/) doi:10.52202/079017-2087

BibTeX

@inproceedings{paissan2024neurips-listenable,
  title     = {{Listenable Maps for Zero-Shot Audio Classifiers}},
  author    = {Paissan, Francesco and Della Libera, Luca and Ravanelli, Mirco and Subakan, Cem},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-2087},
  url       = {https://mlanthology.org/neurips/2024/paissan2024neurips-listenable/}
}