Sharp Generalization Error Bounds for Randomly-Projected Classifiers

Abstract

We derive sharp bounds on the generalization error of a generic linear classifier trained by empirical risk minimization on randomly-projected data. We make no restrictive assumptions (such as sparsity or separability) on the data. Instead, we use the fact that, in a classification setting, the question of interest is really ‘what is the effect of random projection on the predicted class labels?’, and we therefore derive the exact probability of ‘label flipping’ under Gaussian random projection in order to quantify this effect precisely in our bounds.
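
The flip probability the abstract refers to can be illustrated empirically. The sketch below (not the paper's closed-form derivation) uses Monte Carlo simulation to estimate how often the predicted label of a point changes when a fixed linear classifier and the point are both passed through a Gaussian random projection; the function name, dimensions, and sampling choices are our own illustrative assumptions.

```python
import numpy as np

def estimate_flip_probability(h, x, k, n_trials=10000, seed=None):
    """Monte Carlo estimate of P[sign((Rh)^T (Rx)) != sign(h^T x)],
    where R is a k x d Gaussian random projection with i.i.d. N(0, 1/k) entries.
    Illustrative sketch only; the paper derives this probability exactly."""
    rng = np.random.default_rng(seed)
    d = h.shape[0]
    original_sign = np.sign(h @ x)          # label in the original space
    flips = 0
    for _ in range(n_trials):
        R = rng.normal(scale=1.0 / np.sqrt(k), size=(k, d))
        if np.sign((R @ h) @ (R @ x)) != original_sign:
            flips += 1
    return flips / n_trials

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, k = 100, 10                          # hypothetical ambient / projected dimensions
    h = rng.normal(size=d)                  # normal vector of a linear classifier
    x = rng.normal(size=d)                  # a query point
    print(estimate_flip_probability(h, x, k, seed=1))
```

As the geometry suggests, the estimated flip probability grows as the angle between h and x approaches a right angle (points near the decision boundary) and shrinks as the projected dimension k increases.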

Cite

Text

Durrant and Kaban. "Sharp Generalization Error Bounds for Randomly-Projected Classifiers." International Conference on Machine Learning, 2013.

Markdown

[Durrant and Kaban. "Sharp Generalization Error Bounds for Randomly-Projected Classifiers." International Conference on Machine Learning, 2013.](https://mlanthology.org/icml/2013/durrant2013icml-sharp/)

BibTeX

@inproceedings{durrant2013icml-sharp,
  title     = {{Sharp Generalization Error Bounds for Randomly-Projected Classifiers}},
  author    = {Durrant, Robert and Kaban, Ata},
  booktitle = {International Conference on Machine Learning},
  year      = {2013},
  pages     = {693--701},
  volume    = {28},
  url       = {https://mlanthology.org/icml/2013/durrant2013icml-sharp/}
}