Tight Bounds on Transition to Perfect Generalization in Perceptrons

Abstract

“Sudden” transition to perfect generalization in binary perceptrons is investigated. Building on recent theoretical works of Gardner and Derrida (1989) and Baum and Lyuu (1991), we show the following: for α > α_c = 1.44797…, if αn examples are drawn from the uniform distribution on {+1, −1}^n and classified according to a target perceptron w_t ∈ {+1, −1}^n as positive if w_t · x ≥ 0 and negative otherwise, then the expected number of nontarget perceptrons consistent with the examples is 2^(−Θ(√n)); the same number, however, grows exponentially, as 2^(Θ(n)), if α ≤ α_c. Numerical calculations for n up to 1,000,002 are reported.
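The abstract's threshold claim can be probed empirically at very small n. Below is a minimal brute-force Monte Carlo sketch, not taken from the paper: the function name, the choice n = 12, and the trial counts are illustrative assumptions. It counts nontarget perceptrons in {+1, −1}^n consistent with αn random examples labeled by the rule w_t · x ≥ 0 from the abstract.

```python
# Illustrative sketch (not the paper's method): for small n, draw alpha*n
# random {+1,-1}^n examples, label them with a random target perceptron w_t
# as positive iff w_t . x >= 0, and count how many of the 2^n candidate
# weight vectors w != w_t classify every example identically.
import itertools
import random

def count_consistent_nontargets(n: int, alpha: float, trials: int = 20) -> float:
    """Average number of nontarget perceptrons consistent with alpha*n examples."""
    m = max(1, round(alpha * n))
    total = 0
    for _ in range(trials):
        w_t = [random.choice((-1, 1)) for _ in range(n)]
        examples = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(m)]
        # Label x as positive iff w_t . x >= 0, per the abstract.
        labels = [sum(a * b for a, b in zip(w_t, x)) >= 0 for x in examples]
        consistent = 0
        for w in itertools.product((-1, 1), repeat=n):  # all 2^n candidates
            if list(w) == w_t:
                continue
            if all((sum(a * b for a, b in zip(w, x)) >= 0) == y
                   for x, y in zip(examples, labels)):
                consistent += 1
        total += consistent
    return total / trials

if __name__ == "__main__":
    # Sweep alpha across the claimed threshold alpha_c = 1.44797...
    for alpha in (0.5, 1.0, 1.45, 2.0):
        print(alpha, count_consistent_nontargets(n=12, alpha=alpha))
```

The count should fall by orders of magnitude as α grows, but the sharp transition at α_c is an asymptotic, large-n statement; at n = 12 one should expect only a rapid, smooth decay rather than a visible threshold.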

Cite

Text

Lyuu and Rivin. "Tight Bounds on Transition to Perfect Generalization in Perceptrons." Neural Computation, 1992. doi:10.1162/NECO.1992.4.6.854

Markdown

[Lyuu and Rivin. "Tight Bounds on Transition to Perfect Generalization in Perceptrons." Neural Computation, 1992.](https://mlanthology.org/neco/1992/lyuu1992neco-tight/) doi:10.1162/NECO.1992.4.6.854

BibTeX

@article{lyuu1992neco-tight,
  title     = {{Tight Bounds on Transition to Perfect Generalization in Perceptrons}},
  author    = {Lyuu, Yuh-Dauh and Rivin, Igor},
  journal   = {Neural Computation},
  year      = {1992},
  pages     = {854--862},
  doi       = {10.1162/NECO.1992.4.6.854},
  volume    = {4},
  number    = {6},
  url       = {https://mlanthology.org/neco/1992/lyuu1992neco-tight/}
}