Adaptive Deconvolutional Networks for Mid and High Level Feature Learning
Abstract
We present a hierarchical model that learns image decompositions via alternating layers of convolutional sparse coding and max pooling. When trained on natural images, the layers of our model capture image information in a variety of forms: low-level edges, mid-level edge junctions, high-level object parts and complete objects. To build our model we rely on a novel inference scheme that ensures each layer reconstructs the input, rather than just the output of the layer directly beneath, as is common with existing hierarchical approaches. This makes it possible to learn multiple layers of representation and we show models with 4 layers, trained on images from the Caltech-101 and 256 datasets. When combined with a standard classifier, features extracted from these models outperform SIFT, as well as representations from other feature learning methods.
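The abstract describes modeling an image as a sum of sparse latent feature maps convolved with learned filters (convolutional sparse coding). The sketch below illustrates only that reconstruction step for a single layer; the shapes, variable names, and sparsification threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.signal import convolve2d

# Minimal sketch of one deconvolutional layer's reconstruction:
# the image estimate is a sum of latent feature maps z_k convolved
# with learned filters f_k. All sizes below are arbitrary choices
# for illustration.
rng = np.random.default_rng(0)
K, H, W, S = 4, 16, 16, 5              # num filters, map size, filter size
filters = rng.standard_normal((K, S, S))
z = rng.standard_normal((K, H, W))
z[np.abs(z) < 1.5] = 0.0               # sparsify the feature maps

# Reconstruction: y_hat = sum_k (z_k * f_k); "full" convolution
# grows each map back to (H + S - 1) x (W + S - 1).
y_hat = sum(convolve2d(z[k], filters[k], mode="full") for k in range(K))
print(y_hat.shape)  # (20, 20)
```

Inference in the paper additionally alternates this with max pooling and, crucially, optimizes each layer to reconstruct the original input image rather than the layer below.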
Cite
Text
Zeiler et al. "Adaptive Deconvolutional Networks for Mid and High Level Feature Learning." IEEE/CVF International Conference on Computer Vision, 2011. doi:10.1109/ICCV.2011.6126474
Markdown
[Zeiler et al. "Adaptive Deconvolutional Networks for Mid and High Level Feature Learning." IEEE/CVF International Conference on Computer Vision, 2011.](https://mlanthology.org/iccv/2011/zeiler2011iccv-adaptive/) doi:10.1109/ICCV.2011.6126474
BibTeX
@inproceedings{zeiler2011iccv-adaptive,
title = {{Adaptive Deconvolutional Networks for Mid and High Level Feature Learning}},
author = {Zeiler, Matthew D. and Taylor, Graham W. and Fergus, Rob},
booktitle = {IEEE/CVF International Conference on Computer Vision},
year = {2011},
  pages = {2018--2025},
doi = {10.1109/ICCV.2011.6126474},
url = {https://mlanthology.org/iccv/2011/zeiler2011iccv-adaptive/}
}