Representation of Categories in Filters of Deep Neural Networks

Abstract

Transparency in decision-making is an essential aspect of the secure and unbiased application of deep learning to classification problems. Neural networks pre-trained on one dataset can serve as feature extractors for a variety of other tasks. In this work, I study how categories are represented in the latent space of neural networks, using the example of face recognition by a network trained without an explicit category for the human person. I propose a semantic-based approach to determine whether a model has pre-trained filters for a given set of classes of interest and which layer is best suited for feature extraction. The method is similar to the category-selectivity measures used in neuroscience to estimate tuning curves of neurons in high-level areas of the visual cortex.
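The paper's exact measure is not reproduced here, but the neuroscience-style category selectivity it refers to can be sketched as a simple per-filter index. This is a minimal illustration under assumed conventions: it takes mean filter activations for in-category and out-of-category images and computes (μ_cat − μ_other) / (μ_cat + μ_other), a common tuning-curve-style statistic; the function name, data shapes, and toy values are all hypothetical.

```python
import numpy as np

def selectivity_index(acts_category, acts_other):
    """Per-filter category-selectivity index (illustrative, not the paper's code).

    acts_category, acts_other: arrays of shape (n_images, n_filters) holding
    each filter's mean activation for in-category and out-of-category images.
    Returns an array of shape (n_filters,) with values in [-1, 1]; values near
    1 indicate filters that respond almost exclusively to the category.
    """
    mu_cat = acts_category.mean(axis=0)
    mu_other = acts_other.mean(axis=0)
    denom = mu_cat + mu_other
    # Guard against filters that are silent for both image sets.
    return np.where(denom > 0, (mu_cat - mu_other) / np.maximum(denom, 1e-12), 0.0)

# Toy example: filter 0 acts "face-selective", filter 1 responds to everything.
rng = np.random.default_rng(0)
faces = np.abs(rng.normal([5.0, 2.0], 0.1, size=(100, 2)))   # in-category activations
others = np.abs(rng.normal([0.5, 2.0], 0.1, size=(100, 2)))  # out-of-category activations
si = selectivity_index(faces, others)
```

Applied layer by layer, such an index could flag which layer contains the most category-selective filters and is therefore a candidate for feature extraction, mirroring how tuning curves identify category-selective neurons in high-level visual cortex.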

Cite

Text

Malakhova. "Representation of Categories in Filters of Deep Neural Networks." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2018. doi:10.1109/CVPRW.2018.00265

Markdown

[Malakhova. "Representation of Categories in Filters of Deep Neural Networks." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2018.](https://mlanthology.org/cvprw/2018/malakhova2018cvprw-representation/) doi:10.1109/CVPRW.2018.00265

BibTeX

@inproceedings{malakhova2018cvprw-representation,
  title     = {{Representation of Categories in Filters of Deep Neural Networks}},
  author    = {Malakhova, Katerina},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2018},
  pages     = {1973--1975},
  doi       = {10.1109/CVPRW.2018.00265},
  url       = {https://mlanthology.org/cvprw/2018/malakhova2018cvprw-representation/}
}