Analyzing Classifiers: Fisher Vectors and Deep Neural Networks

Abstract

Fisher vector (FV) classifiers and Deep Neural Networks (DNNs) are popular and successful algorithms for solving image classification problems. However, both are generally considered "black box" predictors, as the non-linear transformations involved have so far prevented transparent and interpretable reasoning. Recently, a principled technique, Layer-wise Relevance Propagation (LRP), has been developed to better comprehend the inherent structured reasoning of complex nonlinear classification models such as Bag of Features models or DNNs. In this paper we (1) extend the LRP framework to Fisher vector classifiers and then use it as an analysis tool to (2) quantify the importance of context for classification, (3) qualitatively compare DNNs against FV classifiers in terms of important image regions, and (4) detect potential flaws and biases in the data. All experiments are performed on the PASCAL VOC 2007 and ILSVRC 2012 data sets.
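
For readers unfamiliar with LRP, the following is a minimal NumPy sketch of the generic LRP epsilon-rule for a single dense layer: the relevance R_k arriving at each output neuron is redistributed onto the inputs in proportion to their contributions a_j * w_jk to the pre-activation z_k. The function name lrp_epsilon, the array shapes, and the stabiliser value are illustrative assumptions; this sketch does not reproduce the paper's FV-specific propagation rules.

    import numpy as np

    def lrp_epsilon(a, W, b, R_out, eps=1e-6):
        # a: (J,) input activations, W: (J, K) weights, b: (K,) biases,
        # R_out: (K,) relevance at the layer output. Returns (J,) input relevance.
        z = a @ W + b                                   # pre-activations z_k
        z = z + eps * np.where(z >= 0, 1.0, -1.0)       # epsilon stabiliser avoids division by ~0
        s = R_out / z                                   # relevance per unit of pre-activation
        return a * (W @ s)                              # redistribute in proportion to a_j * w_jk

    # Illustrative usage: propagate the score of one class back through a dense layer.
    rng = np.random.default_rng(0)
    a = rng.random(4096)                    # hypothetical feature vector (e.g. FV or fc activations)
    W = rng.standard_normal((4096, 20))
    b = np.zeros(20)
    R_out = np.zeros(20); R_out[7] = 1.0    # start from the predicted class
    R_in = lrp_epsilon(a, W, b, R_out)

Applying such a rule layer by layer down to the input yields the pixel-wise relevance maps (heatmaps) referred to in the abstract; relevance is approximately conserved across layers, up to what the bias and stabiliser terms absorb.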

Cite

Text

Lapuschkin et al. "Analyzing Classifiers: Fisher Vectors and Deep Neural Networks." Conference on Computer Vision and Pattern Recognition, 2016. doi:10.1109/CVPR.2016.318

Markdown

[Lapuschkin et al. "Analyzing Classifiers: Fisher Vectors and Deep Neural Networks." Conference on Computer Vision and Pattern Recognition, 2016.](https://mlanthology.org/cvpr/2016/lapuschkin2016cvpr-analyzing/) doi:10.1109/CVPR.2016.318

BibTeX

@inproceedings{lapuschkin2016cvpr-analyzing,
  title     = {{Analyzing Classifiers: Fisher Vectors and Deep Neural Networks}},
  author    = {Lapuschkin, Sebastian and Binder, Alexander and Montavon, Gr{\'e}goire and M{\"u}ller, Klaus-Robert and Samek, Wojciech},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2016},
  doi       = {10.1109/CVPR.2016.318},
  url       = {https://mlanthology.org/cvpr/2016/lapuschkin2016cvpr-analyzing/}
}