What Do Deep CNNs Learn About Objects?

Abstract

Deep convolutional neural networks learn extremely powerful image representations, yet most of that power is hidden in the millions of deep-layer parameters. What exactly do these parameters represent? Recent work has started to analyse CNN representations, finding that, e.g., they are invariant to some 2D transformations (Fischer et al., 2014), but are confused by particular types of image noise (Nguyen et al., 2014). In this work, we delve deeper and ask: how invariant are CNNs to object-class variations caused by 3D shape, pose, and photorealism?
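As a minimal illustration of the kind of invariance analysis the abstract refers to (a sketch, not the paper's own protocol), one can compare deep features of an image against a transformed copy and measure their similarity. The model choice (AlexNet, typical of 2015-era analyses), the conv-trunk feature layer, the cosine-similarity metric, and the input file name are all assumptions for this example.

# Sketch: probe CNN feature invariance to a simple 2D transformation.
# Assumed setup: PyTorch + torchvision, a local image "object.jpg".
import torch
import torchvision.transforms.functional as TF
from torchvision import models, transforms
from PIL import Image

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()
feature_extractor = model.features  # convolutional trunk as the "deep representation"

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def deep_features(img):
    # Return a flattened conv-layer feature vector for one image.
    with torch.no_grad():
        x = preprocess(img).unsqueeze(0)        # [1, 3, 224, 224]
        return feature_extractor(x).flatten(1)  # [1, 256*6*6]

img = Image.open("object.jpg").convert("RGB")   # hypothetical input image
rotated = TF.rotate(img, angle=15)              # small in-plane rotation

sim = torch.nn.functional.cosine_similarity(
    deep_features(img), deep_features(rotated)).item()
print(f"cosine similarity under 15-degree rotation: {sim:.3f}")

A value near 1.0 would indicate the representation is largely invariant to that transformation; the paper's question is whether analogous invariance holds for variations in 3D shape, pose, and photorealism rather than simple 2D transforms.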

Cite

Text

Peng et al. "What Do Deep CNNs Learn About Objects?" International Conference on Learning Representations, 2015.

Markdown

[Peng et al. "What Do Deep CNNs Learn About Objects?" International Conference on Learning Representations, 2015.](https://mlanthology.org/iclr/2015/peng2015iclr-deep/)

BibTeX

@inproceedings{peng2015iclr-deep,
  title     = {{What Do Deep CNNs Learn About Objects?}},
  author    = {Peng, Xingchao and Sun, Baochen and Ali, Karim and Saenko, Kate},
  booktitle = {International Conference on Learning Representations},
  year      = {2015},
  url       = {https://mlanthology.org/iclr/2015/peng2015iclr-deep/}
}