Are Convolutional Networks Inherently Foveated?

Abstract

When convolutional layers apply no padding, central pixels have more ways to contribute to the convolution than peripheral pixels. This discrepancy grows exponentially with the number of layers, leading to implicit foveation of the input pixels. We show that this discrepancy can persist even when padding is applied. In particular, with the commonly used zero-padding, foveation effects are significantly reduced but not eliminated. We explore how different aspects of convolution arithmetic impact the extent and magnitude of these effects, and elaborate on which alternative padding techniques can mitigate them. Finally, we compare our findings with foveation in human vision, suggesting that both effects may have a similar nature and similar implications.
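
The counting argument in the abstract can be illustrated numerically. Below is a minimal sketch (assuming PyTorch; this is not code from the paper) that counts, for each input pixel, how many paths connect it to the output of a stack of convolutions: with all kernel weights set to 1, the gradient of the summed output with respect to the input equals exactly this per-pixel contribution count.

```python
import torch
import torch.nn as nn

def contribution_map(num_layers=4, kernel_size=3, padding=0, size=32):
    """Count, per input pixel, the number of paths to the output of a
    stack of convolutions. With all kernel weights set to 1, the gradient
    of the summed output w.r.t. the input equals exactly this count."""
    layers = []
    for _ in range(num_layers):
        conv = nn.Conv2d(1, 1, kernel_size, padding=padding, bias=False)
        nn.init.constant_(conv.weight, 1.0)  # every path contributes 1
        layers.append(conv)
    net = nn.Sequential(*layers)

    x = torch.ones(1, 1, size, size, requires_grad=True)
    net(x).sum().backward()
    return x.grad[0, 0]  # (size, size) map of per-pixel contribution counts

if __name__ == "__main__":
    # With no padding the centre/corner ratio is large; with zero-padding
    # it shrinks considerably but remains above 1 (reduced, not eliminated).
    for name, pad in [("no padding (valid)", 0), ("zero padding (same)", 1)]:
        m = contribution_map(padding=pad)
        centre, corner = m[16, 16].item(), m[0, 0].item()
        print(f"{name}: centre={centre:.0f}, corner={corner:.0f}, "
              f"ratio={centre / corner:.1f}")
```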

Cite

Text

Alsallakh et al. "Are Convolutional Networks Inherently Foveated?" NeurIPS 2021 Workshops: SVRHM, 2021.

Markdown

[Alsallakh et al. "Are Convolutional Networks Inherently Foveated?" NeurIPS 2021 Workshops: SVRHM, 2021.](https://mlanthology.org/neuripsw/2021/alsallakh2021neuripsw-convolutional/)

BibTeX

@inproceedings{alsallakh2021neuripsw-convolutional,
  title     = {{Are Convolutional Networks Inherently Foveated?}},
  author    = {Alsallakh, Bilal and Miglani, Vivek and Kokhlikyan, Narine and Adkins, David and Reblitz-Richardson, Orion},
  booktitle = {NeurIPS 2021 Workshops: SVRHM},
  year      = {2021},
  url       = {https://mlanthology.org/neuripsw/2021/alsallakh2021neuripsw-convolutional/}
}