Does Geometric Structure in Convolutional Filter Space Provide Filter Redundancy Information?
Abstract
This paper studies the geometric structure of the CNN filter space to investigate the redundancy or importance of individual filters. In particular, it analyses the convolutional-layer filter space using simplicial geometry to establish a relation between a filter's relevance and its location on the simplex. Convex combinations of the extremal points of a simplex span its entire volume; as a result, these points are inherently the most relevant components. Based on this principle, we hypothesize that filters lying near the extremal points of a simplex modelling the filter space are the least redundant filters, and vice versa. We validate this positional relevance hypothesis by successfully employing it for data-independent filter ranking and artificial filter fabrication in trained convolutional neural networks. The empirical analysis on different CNN architectures, such as ResNet-50 and VGG-16, provides strong evidence in favour of the postulated positional relevance hypothesis.
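The following is a minimal sketch of the positional relevance idea described above, not the authors' exact method: each filter of a convolutional layer is flattened to a vector, the collection is projected to a low-dimensional space, and the convex-hull vertices of the projected point cloud are taken as a proxy for the simplex's extremal points. Filters are then ranked by their distance to the nearest extremal point, with near-extremal filters treated as the least redundant. The helper name `rank_filters_by_extremality`, the PCA projection, and the random stand-in weights are illustrative assumptions.

```python
# Sketch of positional-relevance filter ranking (assumed implementation,
# not the paper's): near-extremal filters are ranked as least redundant.
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.decomposition import PCA


def rank_filters_by_extremality(weights: np.ndarray, n_components: int = 2) -> np.ndarray:
    """weights: (num_filters, in_channels, k, k) conv kernel tensor.
    Returns filter indices ordered from least to most redundant."""
    flat = weights.reshape(weights.shape[0], -1)            # one vector per filter
    proj = PCA(n_components=n_components).fit_transform(flat)
    hull = ConvexHull(proj)                                  # extremal points of the point cloud
    extremes = proj[hull.vertices]                           # proxy for the simplex vertices
    # distance from each filter to its nearest extremal point
    dists = np.linalg.norm(proj[:, None, :] - extremes[None, :, :], axis=-1).min(axis=1)
    return np.argsort(dists)                                 # near-extremal filters come first


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_layer = rng.normal(size=(64, 3, 3, 3))              # stand-in for a trained conv layer
    ranking = rank_filters_by_extremality(fake_layer)
    print("least redundant filters:", ranking[:8])
```

In a data-independent pruning setting, the tail of this ranking (filters far from every extremal point) would be the natural candidates for removal; the paper's actual ranking criterion may differ.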
Cite
Text
Thakur et al. "Does Geometric Structure in Convolutional Filter Space Provide Filter Redundancy Information?" NeurIPS 2022 Workshops: NeurReps, 2022.
Markdown
[Thakur et al. "Does Geometric Structure in Convolutional Filter Space Provide Filter Redundancy Information?" NeurIPS 2022 Workshops: NeurReps, 2022.](https://mlanthology.org/neuripsw/2022/thakur2022neuripsw-geometric/)
BibTeX
@inproceedings{thakur2022neuripsw-geometric,
title = {{Does Geometric Structure in Convolutional Filter Space Provide Filter Redundancy Information?}},
author = {Thakur, Anshul and Abrol, Vinayak and Sharma, Pulkit},
booktitle = {NeurIPS 2022 Workshops: NeurReps},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/thakur2022neuripsw-geometric/}
}