A Nearly Optimal Variant of the Perceptron Algorithm for the Uniform Distribution on the Unit Sphere

Abstract

We show a simple perceptron-like algorithm to learn origin-centered halfspaces in $\mathbb{R}^n$ with accuracy $1-\epsilon$ and confidence $1-\delta$ in time $\mathcal{O}\left(\frac{n^2}{\epsilon}\left(\log \frac{1}{\epsilon}+\log \frac{1}{\delta}\right)\right)$ using $\mathcal{O}\left(\frac{n}{\epsilon}\left(\log \frac{1}{\epsilon}+\log \frac{1}{\delta}\right)\right)$ labeled examples drawn uniformly from the unit $n$-sphere. This improves upon the algorithms given in Baum (1990), Long (1994), and Servedio (1999). The time and sample complexity of our algorithm match the lower bounds given in Long (1995) up to logarithmic factors.
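For orientation, the setting can be illustrated with the classic perceptron update on examples drawn uniformly from the unit sphere; this is a hedged sketch of the standard algorithm, not the paper's nearly optimal variant, and the function names here are illustrative only.

```python
# Sketch: classic perceptron learning an origin-centered halfspace
# sign(<target, x>) from examples drawn uniformly from the unit sphere.
# This illustrates the problem setting only; the paper's variant differs.
import numpy as np

def sample_unit_sphere(rng, n):
    """Draw a point uniformly from the unit sphere in R^n
    (a normalized standard Gaussian is uniform on the sphere)."""
    x = rng.standard_normal(n)
    return x / np.linalg.norm(x)

def perceptron_halfspace(rng, target, num_examples):
    """Classic perceptron rule: on each mistake, add y * x
    to the hypothesis vector w."""
    n = target.shape[0]
    w = np.zeros(n)
    for _ in range(num_examples):
        x = sample_unit_sphere(rng, n)
        y = 1.0 if target @ x >= 0 else -1.0  # label from the target halfspace
        if y * (w @ x) <= 0:                  # mistake (or tie): update
            w = w + y * x
    return w

rng = np.random.default_rng(0)
target = sample_unit_sphere(rng, 10)
w = perceptron_halfspace(rng, target, 5000)

# Estimate the error (disagreement probability) on fresh uniform examples.
test = np.stack([sample_unit_sphere(rng, 10) for _ in range(2000)])
err = np.mean(np.sign(test @ target) != np.sign(test @ w))
```

Under the uniform distribution, the error of a hypothesis `w` against `target` is proportional to the angle between the two vectors, which is why `err` above serves as a direct proxy for the $1-\epsilon$ accuracy guarantee discussed in the abstract.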

Cite

Text

Schmalhofer. "A Nearly Optimal Variant of the Perceptron Algorithm for the Uniform Distribution on the Unit Sphere." Conference on Learning Theory, 2020.

Markdown

[Schmalhofer. "A Nearly Optimal Variant of the Perceptron Algorithm for the Uniform Distribution on the Unit Sphere." Conference on Learning Theory, 2020.](https://mlanthology.org/colt/2020/schmalhofer2020colt-nearly/)

BibTeX

@inproceedings{schmalhofer2020colt-nearly,
  title     = {{A Nearly Optimal Variant of the Perceptron Algorithm for the Uniform Distribution on the Unit Sphere}},
  author    = {Schmalhofer, Marco},
  booktitle = {Conference on Learning Theory},
  year      = {2020},
  pages     = {3285--3295},
  volume    = {125},
  url       = {https://mlanthology.org/colt/2020/schmalhofer2020colt-nearly/}
}