Going Deeper with Convolutions
Abstract
We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation of this architecture, GoogLeNet, a 22 layers deep network, was used to assess its quality in the context of object detection and classification.
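To make the multi-scale idea concrete, below is a minimal sketch of a single Inception module in PyTorch (the authors' own implementation used DistBelief, so this is an illustration, not the paper's code; the class and argument names are hypothetical). Each module runs 1x1, 3x3, and 5x5 convolutions and 3x3 max pooling in parallel, with 1x1 "reduction" convolutions before the larger filters to keep the computational budget in check, and concatenates the branch outputs along the channel dimension.

import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch, c1, c3_red, c3, c5_red, c5, pool_proj):
        super().__init__()
        # Branch 1: 1x1 convolution
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, c1, 1), nn.ReLU(inplace=True))
        # Branch 2: 1x1 reduction followed by 3x3 convolution
        self.b2 = nn.Sequential(
            nn.Conv2d(in_ch, c3_red, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c3_red, c3, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Branch 3: 1x1 reduction followed by 5x5 convolution
        self.b3 = nn.Sequential(
            nn.Conv2d(in_ch, c5_red, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c5_red, c5, 5, padding=2), nn.ReLU(inplace=True),
        )
        # Branch 4: 3x3 max pooling followed by 1x1 projection
        self.b4 = nn.Sequential(
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, 1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        # Concatenate the four parallel branches along the channel dimension
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)

# Usage example with the filter counts of the paper's "inception (3a)" stage
m = InceptionModule(192, 64, 96, 128, 16, 32, 32)
y = m(torch.randn(1, 192, 28, 28))  # output shape: (1, 256, 28, 28)

Stacking such modules (with occasional stride-2 pooling in between) is what lets GoogLeNet grow to 22 layers while the 1x1 reductions keep the parameter and compute cost roughly constant.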
Cite

Text

Szegedy et al. "Going Deeper with Convolutions." Conference on Computer Vision and Pattern Recognition, 2015. doi:10.1109/CVPR.2015.7298594

Markdown

[Szegedy et al. "Going Deeper with Convolutions." Conference on Computer Vision and Pattern Recognition, 2015.](https://mlanthology.org/cvpr/2015/szegedy2015cvpr-going/) doi:10.1109/CVPR.2015.7298594

BibTeX
@inproceedings{szegedy2015cvpr-going,
title = {{Going Deeper with Convolutions}},
author = {Szegedy, Christian and Liu, Wei and Jia, Yangqing and Sermanet, Pierre and Reed, Scott and Anguelov, Dragomir and Erhan, Dumitru and Vanhoucke, Vincent and Rabinovich, Andrew},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2015},
doi = {10.1109/CVPR.2015.7298594},
url = {https://mlanthology.org/cvpr/2015/szegedy2015cvpr-going/}
}