Striving for Simplicity: The All Convolutional Net
Abstract
Most modern convolutional neural networks (CNNs) used for object recognition are built using the same principles: Alternating convolution and max-pooling layers followed by a small number of fully connected layers. We re-evaluate the state of the art for object recognition from small images with convolutional networks, questioning the necessity of different components in the pipeline. We find that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks. Following this finding -- and building on other recent work for finding simple network structures -- we propose a new architecture that consists solely of convolutional layers and yields competitive or state of the art performance on several object recognition datasets (CIFAR-10, CIFAR-100, ImageNet). To analyze the network we introduce a new variant of the "deconvolution approach" for visualizing features learned by CNNs, which can be applied to a broader range of network structures than existing approaches.
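As a rough illustration of the substitution described in the abstract, the sketch below contrasts a conventional convolution + max-pooling block with a block that downsamples via a stride-2 convolution instead. It is a minimal example only: the use of PyTorch, the channel counts, and the kernel sizes are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

# Conventional block: convolution followed by 2x2 max-pooling.
pooled_block = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),
)

# "All convolutional" variant: the pooling layer is replaced by a
# convolution with stride 2, so the spatial downsampling is learned.
all_conv_block = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(96, 96, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
)

x = torch.randn(1, 3, 32, 32)   # e.g. a CIFAR-10-sized input
print(pooled_block(x).shape)    # torch.Size([1, 96, 16, 16])
print(all_conv_block(x).shape)  # torch.Size([1, 96, 16, 16])
```

Both blocks reduce the spatial resolution by the same factor; the difference is that the strided convolution learns its downsampling weights rather than taking a fixed maximum over each window.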
Cite
Text
Springenberg et al. "Striving for Simplicity: The All Convolutional Net." International Conference on Learning Representations, 2015.
Markdown
[Springenberg et al. "Striving for Simplicity: The All Convolutional Net." International Conference on Learning Representations, 2015.](https://mlanthology.org/iclr/2015/springenberg2015iclr-striving/)
BibTeX
@inproceedings{springenberg2015iclr-striving,
title = {{Striving for Simplicity: The All Convolutional Net}},
author = {Springenberg, Jost Tobias and Dosovitskiy, Alexey and Brox, Thomas and Riedmiller, Martin A.},
booktitle = {International Conference on Learning Representations},
year = {2015},
url = {https://mlanthology.org/iclr/2015/springenberg2015iclr-striving/}
}