Holographic Feature Representations of Deep Networks

Abstract

It is often asserted that deep networks learn "features", traditionally expressed by the activations of intermediate nodes. We explore an alternative concept by defining features as partial derivatives of model output with respect to model parameters---extending a simple yet powerful idea from generalized linear models. The resulting features are not equivalent to node activations, and we show that they can induce a holographic representation of the complete model: the network's output on given data can be exactly replicated by a simple linear model over such features extracted from any ordered cut. We demonstrate useful advantages for this feature representation over standard representations based on node activations.
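As a minimal illustration of the idea in the abstract (a toy sketch, not the paper's construction): for a network whose output is linear in its final-layer weights, the partial derivatives of the output with respect to those weights are the hidden activations, and a linear model that uses the weights themselves as coefficients over these gradient features reproduces the output exactly. All names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer ReLU network (illustrative only, not the paper's setup).
x = rng.normal(size=4)
W1 = rng.normal(size=(5, 4))
w2 = rng.normal(size=5)

h = np.maximum(W1 @ x, 0.0)  # hidden activations
f = w2 @ h                   # scalar network output

# "Features" as partial derivatives of the output with respect to the
# final-layer parameters: d f / d w2 = h (since f is linear in w2).
phi = h

# A linear model over these gradient features, with the parameters
# themselves as coefficients, replicates the network output exactly.
f_linear = w2 @ phi
assert np.isclose(f, f_linear)
```

The same replication property is what the paper establishes more generally for features taken from any ordered cut of the network.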

Cite

Text

Zinkevich et al. "Holographic Feature Representations of Deep Networks." Conference on Uncertainty in Artificial Intelligence, 2017.

Markdown

[Zinkevich et al. "Holographic Feature Representations of Deep Networks." Conference on Uncertainty in Artificial Intelligence, 2017.](https://mlanthology.org/uai/2017/zinkevich2017uai-holographic/)

BibTeX

@inproceedings{zinkevich2017uai-holographic,
  title     = {{Holographic Feature Representations of Deep Networks}},
  author    = {Zinkevich, Martin A. and Davies, Alex and Schuurmans, Dale},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {2017},
  url       = {https://mlanthology.org/uai/2017/zinkevich2017uai-holographic/}
}