Visual Explanations for Convolutional Neural Networks via Latent Traversal of Generative Adversarial Networks (Student Abstract)

Abstract

Lack of explainability in artificial intelligence, specifically deep neural networks, remains a bottleneck for implementing models in practice. Popular techniques such as Gradient-weighted Class Activation Mapping (Grad-CAM) provide a coarse map of salient features in an image, which rarely tells the whole story of what a convolutional neural network (CNN) has learned. Using COVID-19 chest X-rays, we present a method for interpreting what a CNN has learned by utilizing Generative Adversarial Networks (GANs). Our GAN framework disentangles lung structure from COVID-19 features. Using this GAN, we can visualize the transition of a pair of COVID-negative lungs in a chest radiograph to a COVID-positive pair by interpolating in the latent space of the GAN, which provides fine-grained visualization of how the CNN responds to varying features within the lungs.
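The latent traversal the abstract describes can be sketched as linear interpolation between two latent codes, one for a COVID-negative and one for a COVID-positive radiograph. This is a minimal illustration, not the authors' implementation; the `generator` and `cnn` names in the usage comment are placeholders for the paper's actual models.

```python
import numpy as np

def latent_traversal(z_neg, z_pos, num_steps=8):
    """Linearly interpolate between two GAN latent codes.

    Returns an array of shape (num_steps, latent_dim) whose rows move
    from z_neg (step 0) to z_pos (last step).
    """
    alphas = np.linspace(0.0, 1.0, num_steps)
    return np.stack([(1.0 - a) * z_neg + a * z_pos for a in alphas])

# Hypothetical usage: decode each interpolated code with the GAN's
# generator and score the resulting image with the CNN under study,
# observing how its COVID prediction changes along the path:
#   images = generator(latent_traversal(z_neg, z_pos))
#   scores = cnn(images)
```

Each intermediate image lies on a path between the two lungs, so plotting the CNN's score at every step gives the fine-grained visualization of its response that the abstract refers to.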

Cite

Text

Dravid and Katsaggelos. "Visual Explanations for Convolutional Neural Networks via Latent Traversal of Generative Adversarial Networks (Student Abstract)." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I11.21606

Markdown

[Dravid and Katsaggelos. "Visual Explanations for Convolutional Neural Networks via Latent Traversal of Generative Adversarial Networks (Student Abstract)." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/dravid2022aaai-visual/) doi:10.1609/AAAI.V36I11.21606

BibTeX

@inproceedings{dravid2022aaai-visual,
  title     = {{Visual Explanations for Convolutional Neural Networks via Latent Traversal of Generative Adversarial Networks (Student Abstract)}},
  author    = {Dravid, Amil and Katsaggelos, Aggelos K.},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {12939--12940},
  doi       = {10.1609/AAAI.V36I11.21606},
  url       = {https://mlanthology.org/aaai/2022/dravid2022aaai-visual/}
}