We Need Far Fewer Unique Filters than We Thought

Abstract

We challenge the conventional belief that CNNs require numerous distinct kernels for effective image classification. Our study of depthwise separable CNNs (DS-CNNs) reveals that a drastically reduced set of unique filters can maintain performance. Replacing thousands of trained filters in ConvNeXt V2 with the closest linear transform from a small filter set results in only small accuracy drops. Remarkably, initializing the depthwise filters with **only 8 unique frozen filters** achieves a minimal accuracy drop on ImageNet. Our findings question the necessity of numerous filters in DS-CNNs, offering insights into more efficient network designs.
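The replacement step described above can be read as a least-squares projection: each trained depthwise kernel is swapped for the closest point in the span of a small filter basis. Below is a minimal PyTorch sketch of that idea, assuming 7x7 depthwise kernels as in ConvNeXt; it is not the authors' released code, and the function name, the random stand-in basis, and the layer sizes are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' implementation): project each
# trained depthwise kernel onto the span of a small filter basis via
# least squares, then replace it with that closest linear combination.
import torch

def replace_with_closest_combination(depthwise_weight, basis):
    """depthwise_weight: (C, 1, k, k) trained depthwise kernels.
    basis: (B, k, k) small set of B unique filters (e.g. B = 8)."""
    C, _, k, _ = depthwise_weight.shape
    W = depthwise_weight.reshape(C, k * k)      # each kernel as a row vector
    A = basis.reshape(basis.shape[0], k * k).T  # (k*k, B) basis as columns
    # Least-squares coefficients: argmin_x ||A x - w||_2 for every kernel w
    coeffs = torch.linalg.lstsq(A, W.T).solution  # (B, C)
    W_hat = (A @ coeffs).T                        # closest points in span(basis)
    return W_hat.reshape(C, 1, k, k)

# Illustrative usage on a single depthwise layer; the random basis here
# merely stands in for the paper's small set of unique filters.
conv = torch.nn.Conv2d(64, 64, 7, padding=3, groups=64, bias=False)
basis = torch.randn(8, 7, 7)
conv.weight.data = replace_with_closest_combination(conv.weight.data, basis)
```

Framing the swap as a projection makes the paper's claim concrete: accuracy survives even when every depthwise kernel is confined to an 8-dimensional subspace of the full k*k kernel space.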

Cite

Text

Babaiee et al. "We Need Far Fewer Unique Filters than We Thought." NeurIPS 2024 Workshops: SciForDL, 2024.

Markdown

[Babaiee et al. "We Need Far Fewer Unique Filters than We Thought." NeurIPS 2024 Workshops: SciForDL, 2024.](https://mlanthology.org/neuripsw/2024/babaiee2024neuripsw-we/)

BibTeX

@inproceedings{babaiee2024neuripsw-we,
  title     = {{We Need Far Fewer Unique Filters than We Thought}},
  author    = {Babaiee, Zahra and Kiasari, Peyman and Rus, Daniela and Grosu, Radu},
  booktitle = {NeurIPS 2024 Workshops: SciForDL},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/babaiee2024neuripsw-we/}
}