Debugging the Internals of Convolutional Networks

Abstract

The filters learned by Convolutional Neural Networks (CNNs) and the feature maps these filters compute are sensitive to convolution arithmetic. Several architectural choices that dictate this arithmetic can result in feature-map artifacts. These artifacts can interfere with the downstream task and degrade accuracy and robustness. We provide a number of visual debugging techniques to surface feature-map artifacts and to analyze how they emerge in CNNs. These techniques also help analyze the impact of such artifacts on the weights learned by the model. Guided by our analysis, model developers can make informed architectural choices that verifiably mitigate harmful artifacts and improve the model's accuracy and its shift robustness.
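As a concrete illustration of how convolution arithmetic can induce feature-map artifacts, the NumPy sketch below (not code from the paper; all names are hypothetical) applies a "same"-size averaging convolution with zero padding to a constant image. The interior response is uniform, but the zero padding attenuates the borders, producing a simple padding-induced artifact of the kind the paper's techniques aim to surface.

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same' convolution with zero padding (illustrative sketch only)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))  # zero-pad so output matches input size
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

x = np.ones((6, 6))          # constant input: ideally a uniform response everywhere
k = np.ones((3, 3)) / 9.0    # 3x3 averaging filter
y = conv2d_same(x, k)

# Interior responses equal 1.0, but border responses are attenuated by the
# zeros mixed in from padding -- a feature-map artifact caused purely by
# convolution arithmetic, not by the input content.
print(y[2, 2])  # interior: 1.0
print(y[0, 0])  # corner: 4/9, attenuated by padding
```

Artifacts like this boundary attenuation can propagate and compound across layers, which is why the architectural choices governing padding and stride matter for the downstream task.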

Cite

Text

Alsallakh et al. "Debugging the Internals of Convolutional Networks." NeurIPS 2021 Workshops: XAI4Debugging, 2021.

Markdown

[Alsallakh et al. "Debugging the Internals of Convolutional Networks." NeurIPS 2021 Workshops: XAI4Debugging, 2021.](https://mlanthology.org/neuripsw/2021/alsallakh2021neuripsw-debugging/)

BibTeX

@inproceedings{alsallakh2021neuripsw-debugging,
  title     = {{Debugging the Internals of Convolutional Networks}},
  author    = {Alsallakh, Bilal and Kokhlikyan, Narine and Miglani, Vivek and Muttepawar, Shubham and Wang, Edward and Zhang, Sara and Adkins, David and Reblitz-Richardson, Orion},
  booktitle = {NeurIPS 2021 Workshops: XAI4Debugging},
  year      = {2021},
  url       = {https://mlanthology.org/neuripsw/2021/alsallakh2021neuripsw-debugging/}
}