Is Each Layer Non-Trivial in CNN? (Student Abstract)

Abstract

Convolutional neural network (CNN) models have achieved great success in many fields. With the advent of ResNet, the networks used in practice are becoming deeper and wider. However, is each layer in these networks non-trivial? To answer this question, we trained a network on the training set, replaced selected convolution kernels of the network with zeros, and tested the resulting models on the test set. Comparing the experimental results with the baseline showed that the modified models can reach similar, or even the same, performance. Although convolution kernels are the core of a network, we demonstrate that some of them are trivial and regular in ResNet.
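The abstract describes a simple ablation: train a network, zero out the convolution kernels of one layer, and re-test. A minimal sketch of that kind of procedure, assuming PyTorch and a torchvision ResNet (this is not the authors' code; zero_out_layer and the evaluation call are hypothetical names used for illustration):

import torch
import torchvision

# A pretrained or freshly trained ResNet; num_classes is an assumption.
model = torchvision.models.resnet18(num_classes=10)
# ... train the model on the training set as usual ...

def zero_out_layer(model: torch.nn.Module, layer_name: str) -> None:
    """Replace the convolution kernels of one named layer with zeros."""
    with torch.no_grad():
        for name, module in model.named_modules():
            if name == layer_name and isinstance(module, torch.nn.Conv2d):
                module.weight.zero_()

# Zero one convolution layer at a time and re-evaluate; if test accuracy
# barely changes relative to the baseline, that layer's kernels are
# arguably trivial in the sense the abstract uses.
zero_out_layer(model, "layer3.1.conv2")
# accuracy = evaluate(model, test_loader)  # hypothetical evaluation helper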

Cite

Text

Wang et al. "Is Each Layer Non-Trivial in CNN? (Student Abstract)." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/aaai.v35i18.17954

Markdown

[Wang et al. "Is Each Layer Non-Trivial in CNN? (Student Abstract)." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/wang2021aaai-each/) doi:10.1609/aaai.v35i18.17954

BibTeX

@inproceedings{wang2021aaai-each,
  title     = {{Is Each Layer Non-Trivial in CNN? (Student Abstract)}},
  author    = {Wang, Wei and Zhu, Yanjie and Cui, Zhuoxu and Liang, Dong},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {15915--15916},
  doi       = {10.1609/aaai.v35i18.17954},
  url       = {https://mlanthology.org/aaai/2021/wang2021aaai-each/}
}