Input Space Mode Connectivity in Deep Neural Networks

Abstract

We extend the concept of loss landscape mode connectivity to the input space of deep neural networks. Initially studied in parameter space, mode connectivity describes the existence of low-loss paths between solutions (loss minimizers) found via gradient descent. We present theoretical and empirical evidence of its presence in the input space of deep networks, thereby highlighting the broader nature of the phenomenon. We observe that different input images with similar predictions are generally connected, and for trained models, the path tends to be simple, deviating only slightly from a linear path. We conjecture that input space mode connectivity in high-dimensional spaces is a geometric phenomenon, present even in untrained models, and can be explained by percolation theory. We exploit mode connectivity to obtain new insights about adversarial examples and show its potential for adversarial detection and interpretability.
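To make the idea concrete, below is a minimal PyTorch sketch of how a low-loss connecting path between two inputs might be found. The quadratic Bezier parameterization with a single learnable control point is borrowed from parameter-space mode connectivity (Garipov et al., 2018) and is an assumption here, not necessarily the exact procedure of the paper; `model`, `x0`, `x1`, and `label` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def optimize_path(model, x0, x1, label, steps=500, n_samples=16, lr=1e-2):
    """Sketch: search for a low-loss path between inputs x0 and x1
    (each of shape (C, H, W)) that `model` classifies as `label`.

    The path is a quadratic Bezier curve whose single control point is
    learned. It is initialized at the linear midpoint, so if the straight
    segment between x0 and x1 is already low-loss, the optimized path
    stays close to linear.
    """
    model.eval()
    # Freeze model weights; gradients should flow only to the control point.
    for p in model.parameters():
        p.requires_grad_(False)

    control = ((x0 + x1) / 2).detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([control], lr=lr)
    targets = torch.full((n_samples,), label, dtype=torch.long,
                         device=x0.device)

    for _ in range(steps):
        # Sample random positions t in (0, 1) along the path.
        t = torch.rand(n_samples, 1, 1, 1, device=x0.device)
        # Quadratic Bezier: (1-t)^2 x0 + 2 t (1-t) control + t^2 x1,
        # batched over the sampled t values.
        points = (1 - t) ** 2 * x0 + 2 * t * (1 - t) * control + t ** 2 * x1
        loss = F.cross_entropy(model(points), targets)
        opt.zero_grad()
        loss.backward()
        opt.step()

    return control.detach()
```

Evaluating the loss on a uniform grid of t values along the optimized curve, and comparing it to the loss along the straight segment, gives a simple measure of the barrier and of how far the connecting path deviates from linearity.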

Cite

Text

Vrabel et al. "Input Space Mode Connectivity in Deep Neural Networks." NeurIPS 2024 Workshops: SciForDL, 2024.

Markdown

[Vrabel et al. "Input Space Mode Connectivity in Deep Neural Networks." NeurIPS 2024 Workshops: SciForDL, 2024.](https://mlanthology.org/neuripsw/2024/vrabel2024neuripsw-input/)

BibTeX

@inproceedings{vrabel2024neuripsw-input,
  title     = {{Input Space Mode Connectivity in Deep Neural Networks}},
  author    = {Vrabel, Jakub and Shem-Ur, Ori and Oz, Yaron and Krueger, David},
  booktitle = {NeurIPS 2024 Workshops: SciForDL},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/vrabel2024neuripsw-input/}
}