Quick Training of Probabilistic Neural Nets by Importance Sampling

Abstract

Our previous work on statistical language modeling introduced probabilistic feedforward neural networks to help deal with the curse of dimensionality. Training this model by maximum likelihood, however, requires as many network passes per training example as there are words in the vocabulary. Inspired by the contrastive divergence model, we propose and evaluate sampling-based methods that require network passes only for the observed "positive example" and a few sampled negative example words. A very significant speed-up is obtained with adaptive importance sampling.
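To make the idea concrete, here is a minimal NumPy sketch of the sampling trick the abstract describes: the log-likelihood gradient of a softmax output splits into a cheap "positive" term for the observed word and an expensive expectation over the whole vocabulary, and that expectation is replaced by a self-normalized importance-sampling estimate over a handful of negative words drawn from a proposal distribution (e.g., the unigram distribution). All names here (`sampled_grad_weights`, `scores_fn`, `proposal_probs`) are illustrative assumptions, not the paper's code, and the adaptive adjustment of the proposal distribution is omitted for brevity.

```python
import numpy as np

def sampled_grad_weights(scores_fn, target, vocab_size, proposal_probs,
                         num_samples=25, rng=None):
    """Importance-sampling estimate of the softmax log-likelihood gradient.

    For scores s_i, the exact gradient of log P(target) w.r.t. s is
        e_target - softmax(s),
    whose second term is an expectation over the full vocabulary.
    Here that expectation is approximated with `num_samples` negative
    words drawn from `proposal_probs`, so only 1 + num_samples score
    evaluations (network passes) are needed instead of `vocab_size`.
    Returns {word_index: coefficient} to weight each word's score gradient.
    """
    rng = rng or np.random.default_rng()
    # Draw negative examples from the proposal distribution q.
    neg = rng.choice(vocab_size, size=num_samples, p=proposal_probs)
    neg_scores = np.array([scores_fn(i) for i in neg])
    # Unnormalized importance weights w_i = exp(s_i) / q(i);
    # subtracting the max guards against overflow (it cancels below).
    w = np.exp(neg_scores - neg_scores.max()) / proposal_probs[neg]
    w = w / w.sum()  # self-normalization: consistent, slightly biased
    # +1 on the observed word, -w_i on each sampled negative word.
    grad = {int(target): 1.0}
    for i, wi in zip(neg, w):
        grad[int(i)] = grad.get(int(i), 0.0) - float(wi)
    return grad

# Toy usage: 10-word vocabulary, fixed scores, uniform proposal.
V = 10
theta = np.random.default_rng(0).normal(size=V)
print(sampled_grad_weights(lambda i: theta[i], target=3,
                           vocab_size=V, proposal_probs=np.full(V, 1.0 / V)))
```

This sketch keeps the proposal fixed; in the paper's adaptive variant, the proposal is re-estimated during training so that it tracks the model distribution, which is what produces the very significant speed-up reported.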

Cite

Text

Bengio and Senécal. "Quick Training of Probabilistic Neural Nets by Importance Sampling." Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, 2003.

Markdown

[Bengio and Senécal. "Quick Training of Probabilistic Neural Nets by Importance Sampling." Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, 2003.](https://mlanthology.org/aistats/2003/bengio2003aistats-quick/)

BibTeX

@inproceedings{bengio2003aistats-quick,
  title     = {{Quick Training of Probabilistic Neural Nets by Importance Sampling}},
  author    = {Bengio, Yoshua and Senécal, Jean-Sébastien},
  booktitle = {Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics},
  year      = {2003},
  pages     = {17--24},
  volume    = {R4},
  url       = {https://mlanthology.org/aistats/2003/bengio2003aistats-quick/}
}