Gradient Information for Representation and Modeling

Abstract

Motivated by Fisher divergence, in this paper we present a new set of information quantities which we refer to as gradient information. These measures serve as surrogates for classical information measures, such as those based on logarithmic loss, Kullback-Leibler divergence, and directed Shannon information, in many data-processing scenarios of interest, and often provide significant computational advantages, improved stability, and robustness. As an example, we apply these measures to the Chow-Liu tree algorithm, and demonstrate remarkable performance and significant computational reduction using both synthetic and real data.
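
For context on the Chow-Liu application mentioned in the abstract: the classical Chow-Liu procedure scores every pair of variables (traditionally with mutual information) and keeps a maximum-weight spanning tree over those scores. The sketch below shows only that generic skeleton with a pluggable pairwise score; the Gaussian mutual-information score used here is an illustrative stand-in, not the paper's gradient-information surrogate, and the function names are assumptions for illustration only.

# Sketch of the Chow-Liu skeleton with a pluggable pairwise score.
# The paper replaces the classical mutual-information score with a
# gradient-information surrogate; the Gaussian score below is only a
# placeholder, not the paper's estimator.
import numpy as np

def gaussian_mi(x, y):
    """Mutual information of two variables under a joint-Gaussian assumption."""
    r = np.corrcoef(x, y)[0, 1]
    return -0.5 * np.log(1.0 - r ** 2 + 1e-12)

def chow_liu_tree(data, pairwise_score=gaussian_mi):
    """Maximum-weight spanning tree over the columns of `data`,
    with edge weights given by `pairwise_score` (Kruskal + union-find)."""
    n_vars = data.shape[1]
    edges = [(pairwise_score(data[:, i], data[:, j]), i, j)
             for i in range(n_vars) for j in range(i + 1, n_vars)]
    edges.sort(reverse=True)          # heaviest edges first
    parent = list(range(n_vars))

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]   # path halving
            u = parent[u]
        return u

    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                  # edge keeps the graph acyclic
            parent[ri] = rj
            tree.append((i, j, w))
    return tree

# Example: recover a chain-structured dependence X0 -> X1 -> X2 from samples.
rng = np.random.default_rng(0)
x0 = rng.normal(size=5000)
x1 = x0 + 0.5 * rng.normal(size=5000)
x2 = x1 + 0.5 * rng.normal(size=5000)
print(chow_liu_tree(np.column_stack([x0, x1, x2])))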

Cite

Text

Ding et al. "Gradient Information for Representation and Modeling." Neural Information Processing Systems, 2019.

Markdown

[Ding et al. "Gradient Information for Representation and Modeling." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/ding2019neurips-gradient/)

BibTeX

@inproceedings{ding2019neurips-gradient,
  title     = {{Gradient Information for Representation and Modeling}},
  author    = {Ding, Jie and Calderbank, Robert and Tarokh, Vahid},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {2396--2405},
  url       = {https://mlanthology.org/neurips/2019/ding2019neurips-gradient/}
}