Learning Continuous Distributions: Simulations with Field Theoretic Priors

Abstract

Learning of a smooth but nonparametric probability density can be regularized using methods of Quantum Field Theory. We implement a field theoretic prior numerically, test its efficacy, and show that the free parameter of the theory ('smoothness scale') can be determined self-consistently by the data; this forms an infinite dimensional generalization of the MDL principle. Finally, we study the implications of one's choice of the prior and the parameterization and conclude that the smoothness scale determination makes density estimation very weakly sensitive to the choice of the prior, and that even wrong choices can be advantageous for small data sets.
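To make the idea concrete, here is a minimal sketch of the kind of estimator the abstract describes: the density is written as q(x) = exp(phi(x))/Z, and a penalty on the gradient of phi plays the role of the field theoretic (smoothness) prior. The grid, optimizer, and the fixed penalty weight `lam` are illustrative assumptions, not the authors' implementation; in the paper the smoothness scale is itself determined self-consistently from the data.

```python
# Sketch of smoothness-regularized nonparametric density estimation,
# in the spirit of a field-theoretic prior on phi(x) = log q(x) + const.
# Grid size, learning rate, and lam are illustrative choices only.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=2000)   # stand-in for samples from an unknown density

# Discretize: represent q(x) = exp(phi(x)) / Z on a uniform grid.
xs = np.linspace(-4.0, 4.0, 80)
dx = xs[1] - xs[0]
edges = np.append(xs - dx / 2, xs[-1] + dx / 2)
counts, _ = np.histogram(data, bins=edges)
freq = counts / counts.sum()             # empirical bin frequencies

lam = 0.01    # smoothness penalty weight (the role of the paper's smoothness scale)
phi = np.zeros_like(xs)
lr = 2.0
for _ in range(5000):
    q = np.exp(phi)
    q /= q.sum() * dx                    # normalized density on the grid
    # Gradient of: -sum_j freq_j * phi_j + log Z + (lam/dx) * sum_k (phi_{k+1}-phi_k)^2
    grad = -freq + q * dx
    d = np.diff(phi)
    grad[:-1] -= 2.0 * lam / dx * d      # -dS/dphi_j contribution from d_j
    grad[1:] += 2.0 * lam / dx * d       # +dS/dphi_j contribution from d_{j-1}
    phi -= lr * grad

q = np.exp(phi)
q /= q.sum() * dx                        # final MAP density estimate, integrates to 1
```

The exponential parameterization keeps the estimate positive and normalized by construction; making `lam` larger trades fidelity to the histogram for smoothness, which is exactly the tradeoff the paper studies.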

Cite

Text

Nemenman and Bialek. "Learning Continuous Distributions: Simulations with Field Theoretic Priors." Neural Information Processing Systems, 2000.

Markdown

[Nemenman and Bialek. "Learning Continuous Distributions: Simulations with Field Theoretic Priors." Neural Information Processing Systems, 2000.](https://mlanthology.org/neurips/2000/nemenman2000neurips-learning/)

BibTeX

@inproceedings{nemenman2000neurips-learning,
  title     = {{Learning Continuous Distributions: Simulations with Field Theoretic Priors}},
  author    = {Nemenman, Ilya and Bialek, William},
  booktitle = {Neural Information Processing Systems},
  year      = {2000},
  pages     = {287--293},
  url       = {https://mlanthology.org/neurips/2000/nemenman2000neurips-learning/}
}