Deconvolutional Networks

Abstract

Building robust low- and mid-level image representations, beyond edge primitives, is a long-standing goal in vision. Many existing feature detectors spatially pool edge information, which destroys cues such as edge intersections, parallelism and symmetry. We present a learning framework where features that capture these mid-level cues spontaneously emerge from image data. Our approach is based on the convolutional decomposition of images under a sparsity constraint and is totally unsupervised. By building a hierarchy of such decompositions we can learn rich feature sets that are a robust image representation for both the analysis and synthesis of images.
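The core operation the abstract describes is a convolutional sparse decomposition: represent an image as a sum of filters convolved with sparse feature maps, found by minimizing a reconstruction cost plus an L1 penalty. Below is a minimal sketch of that inference step with *fixed* filters, using plain ISTA (gradient step plus soft-thresholding). This is an illustration of the general technique, not the paper's actual method — the paper jointly learns the filters and uses a different optimizer — and all names (`ista_conv_sparse`, `lam`, `step`) are hypothetical.

```python
import numpy as np
from scipy.signal import fftconvolve

def ista_conv_sparse(y, filters, lam=0.01, step=0.1, iters=200):
    """Sketch of convolutional sparse coding inference via ISTA.

    Minimizes 0.5 * || sum_k f_k * z_k - y ||^2 + lam * sum_k ||z_k||_1
    over the feature maps z_k, with the filters f_k held fixed.
    """
    # One 'same'-sized feature map per filter, initialized to zero.
    maps = [np.zeros_like(y) for _ in filters]
    for _ in range(iters):
        # Current reconstruction: sum of filter * feature-map convolutions.
        recon = sum(fftconvolve(z, f, mode="same")
                    for z, f in zip(maps, filters))
        resid = recon - y
        for k, f in enumerate(filters):
            # Gradient of the data term w.r.t. z_k is the correlation of the
            # residual with f_k, i.e. convolution with the flipped filter.
            grad = fftconvolve(resid, f[::-1, ::-1], mode="same")
            z = maps[k] - step * grad
            # Soft-thresholding enforces the L1 sparsity penalty.
            maps[k] = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return maps
```

Stacking such decompositions — feeding the inferred feature maps of one layer as the input to the next — is what yields the hierarchy of mid-level features the abstract refers to.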

Cite

Text

Zeiler et al. "Deconvolutional Networks." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2010. doi:10.1109/CVPR.2010.5539957

Markdown

[Zeiler et al. "Deconvolutional Networks." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2010.](https://mlanthology.org/cvpr/2010/zeiler2010cvpr-deconvolutional/) doi:10.1109/CVPR.2010.5539957

BibTeX

@inproceedings{zeiler2010cvpr-deconvolutional,
  title     = {{Deconvolutional Networks}},
  author    = {Zeiler, Matthew D. and Krishnan, Dilip and Taylor, Graham W. and Fergus, Robert},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {2010},
  pages     = {2528--2535},
  doi       = {10.1109/CVPR.2010.5539957},
  url       = {https://mlanthology.org/cvpr/2010/zeiler2010cvpr-deconvolutional/}
}