A Fast, Compact Approximation of the Exponential Function

Abstract

Neural network simulations often spend a large proportion of their time computing exponential functions. Since the exponentiation routines of typical math libraries are rather slow, their replacement with a fast approximation can greatly reduce the overall computation time. This article describes how exponentiation can be approximated by manipulating the components of a standard (IEEE-754) floating-point representation. This models the exponential function as well as a lookup table with linear interpolation, but is significantly faster and more compact.
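The idea the abstract describes can be sketched in a few lines of C. This is a minimal illustration, not the paper's exact code: the constants follow the commonly cited formulation of Schraudolph's macro (2^20/ln 2 scales the argument into units of the exponent field's least significant bit, 1023·2^20 is the exponent bias, and 60801 is a correction term that reduces the approximation error), and the name `fast_exp` is chosen here for illustration.

```c
#include <stdint.h>

/* Approximate exp(y) by writing a linear function of y directly into
 * the exponent field of an IEEE-754 double.  The high 32 bits of a
 * double hold the sign, the 11-bit biased exponent, and the top
 * mantissa bits, so setting them to
 *     (2^20 / ln 2) * y + (1023 * 2^20 - 60801)
 * yields a value close to e^y; the leaked low-order bits act like
 * linear interpolation between powers of two. */
static double fast_exp(double y)
{
    union { double d; int64_t i; } u;
    /* 1072632447 = 1023*2^20 - 60801; the shift places the 32-bit
     * result into the high word of the double's bit pattern. */
    u.i = (int64_t)(1512775.3951951856 * y + 1072632447.0) << 32;
    return u.d;
}
```

The approximation is valid only while the computed exponent stays in range (roughly |y| < 700) and is accurate to within a few percent relative error; the paper derives the error bounds and constants precisely.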

Cite

Text

Schraudolph. "A Fast, Compact Approximation of the Exponential Function." Neural Computation, 1999. doi:10.1162/089976699300016467

Markdown

[Schraudolph. "A Fast, Compact Approximation of the Exponential Function." Neural Computation, 1999.](https://mlanthology.org/neco/1999/schraudolph1999neco-fast/) doi:10.1162/089976699300016467

BibTeX

@article{schraudolph1999neco-fast,
  title     = {{A Fast, Compact Approximation of the Exponential Function}},
  author    = {Schraudolph, Nicol N.},
  journal   = {Neural Computation},
  year      = {1999},
  pages     = {853--862},
  doi       = {10.1162/089976699300016467},
  volume    = {11},
  url       = {https://mlanthology.org/neco/1999/schraudolph1999neco-fast/}
}