Transforming Neural-Net Output Levels to Probability Distributions

Abstract

(1) The outputs of a typical multi-output classification network do not satisfy the axioms of probability: probabilities should be positive and should sum to one. This problem can be solved by treating the trained network as a preprocessor that produces a feature vector that can be further processed, for instance by classical statistical estimation techniques. (2) We present a method for computing the first two moments of the probability distribution indicating the range of outputs that are consistent with the input and the training data. It is particularly useful to combine these two ideas: we implement the ideas of section 1 using Parzen windows, where the shape and relative size of each window are computed using the ideas of section 2. This allows us to make contact between important theoretical ideas (e.g. the ensemble formalism) and practical techniques (e.g. back-prop). Our results also shed new light on and generalize the well-known "softmax" scheme.
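The abstract's first point, that raw output levels need not be positive or sum to one, is commonly addressed with the "softmax" normalization that the paper sheds light on and generalizes. Below is a minimal sketch in Python (not from the paper; the function and variable names are illustrative) showing how raw output levels can be mapped to a valid probability distribution.

import numpy as np

def softmax(levels):
    # Shift by the maximum level for numerical stability; this does not change the result.
    shifted = levels - np.max(levels)
    exp_levels = np.exp(shifted)
    # Exponentiation makes every component positive; dividing by the sum makes them sum to one.
    return exp_levels / exp_levels.sum()

# Illustrative raw output levels from a multi-output classification network.
raw_outputs = np.array([2.0, -1.0, 0.5])
probs = softmax(raw_outputs)
print(probs, probs.sum())  # components are positive and sum to 1.0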

Cite

Text

Denker and LeCun. "Transforming Neural-Net Output Levels to Probability Distributions." Neural Information Processing Systems, 1990.

Markdown

[Denker and LeCun. "Transforming Neural-Net Output Levels to Probability Distributions." Neural Information Processing Systems, 1990.](https://mlanthology.org/neurips/1990/denker1990neurips-transforming/)

BibTeX

@inproceedings{denker1990neurips-transforming,
  title     = {{Transforming Neural-Net Output Levels to Probability Distributions}},
  author    = {Denker, John S. and LeCun, Yann},
  booktitle = {Neural Information Processing Systems},
  year      = {1990},
  pages     = {853-859},
  url       = {https://mlanthology.org/neurips/1990/denker1990neurips-transforming/}
}