On the Complexity of Proper Distribution-Free Learning of Linear Classifiers
Abstract
For proper distribution-free learning of linear classifiers in $d$ dimensions from $m$ examples, we prove a lower bound on the optimal expected error of $\frac{d - o(1)}{m}$, improving on the best previous lower bound of $\frac{d/\sqrt{e} - o(1)}{m}$, and nearly matching a $\frac{d+1}{m+1}$ upper bound achieved by the linear support vector machine.
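As a hedged numeric illustration of the bounds above (ignoring the $o(1)$ terms, and with arbitrary example values of $d$ and $m$ chosen here for concreteness), the sketch below compares the earlier $\frac{d/\sqrt{e}}{m}$ lower bound, the new $\frac{d}{m}$ lower bound, and the $\frac{d+1}{m+1}$ SVM upper bound:

```python
import math

# Illustrative comparison of the leading terms in the expected-error
# bounds, with the o(1) corrections dropped (an assumption for this
# sketch, not part of the paper's statements).
d, m = 10, 1000  # example dimension and sample size

prev_lower = (d / math.sqrt(math.e)) / m  # earlier lower bound: (d/sqrt(e))/m
new_lower = d / m                         # this paper's lower bound: d/m
upper = (d + 1) / (m + 1)                 # linear SVM upper bound: (d+1)/(m+1)

print(f"previous lower bound ~ {prev_lower:.5f}")
print(f"new lower bound      ~ {new_lower:.5f}")
print(f"SVM upper bound      ~ {upper:.5f}")
# The new lower bound tightens the previous one by a factor of
# sqrt(e) ~ 1.6487 and nearly meets the upper bound.
print(f"improvement factor   ~ {new_lower / prev_lower:.4f}")
```

For large $m$ the remaining gap between $\frac{d}{m}$ and $\frac{d+1}{m+1}$ is only about $\frac{1}{m}$, which is why the new lower bound "nearly matches" the SVM upper bound.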
Cite
Text
Long and Long. "On the Complexity of Proper Distribution-Free Learning of Linear Classifiers." Proceedings of the 31st International Conference on Algorithmic Learning Theory, 2020.
Markdown
[Long and Long. "On the Complexity of Proper Distribution-Free Learning of Linear Classifiers." Proceedings of the 31st International Conference on Algorithmic Learning Theory, 2020.](https://mlanthology.org/alt/2020/long2020alt-complexity/)
BibTeX
@inproceedings{long2020alt-complexity,
title = {{On the Complexity of Proper Distribution-Free Learning of Linear Classifiers}},
author = {Long, Philip M. and Long, Raphael J.},
booktitle = {Proceedings of the 31st International Conference on Algorithmic Learning Theory},
year = {2020},
pages = {583--591},
volume = {117},
url = {https://mlanthology.org/alt/2020/long2020alt-complexity/}
}