Tight Bounds for the Expected Risk of Linear Classifiers and PAC-Bayes Finite-Sample Guarantees
Abstract
We analyze the expected risk of linear classifiers for a fixed weight vector in the “minimax” setting. That is, we analyze the worst-case risk among all data distributions with a given mean and covariance. We provide a simpler proof of the tight polynomial-tail bound for general random variables. For sub-Gaussian random variables, we derive a novel tight exponential-tail bound. We also provide new PAC-Bayes finite-sample guarantees when training data is available. Our “minimax” generalization bounds are dimensionality-independent and \mathcal{O}(\sqrt{1/m}) for m samples.
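For context, the tight polynomial-tail bound referenced in the abstract is, to our reading, the classical one-sided Chebyshev (Cantelli) bound, which is attained by a two-point distribution and is therefore tight. A minimal worked statement follows; the notation z, w, x, y, \mu_z, \sigma_z is ours and not necessarily the paper’s. For a real random variable z with mean \mu_z > 0 and variance \sigma_z^2,

\sup \, \mathbb{P}(z \le 0) \;=\; \frac{\sigma_z^2}{\sigma_z^2 + \mu_z^2},

where the supremum ranges over all distributions with that mean and variance. Instantiating z = y \, w^\top x, the margin of a fixed linear classifier w, gives the worst-case (“minimax”) expected 0-1 risk over all data distributions with the given first and second moments:

\sup_{D} \, \mathbb{P}_{(x,y) \sim D}\big( y \, w^\top x \le 0 \big) \;=\; \frac{1}{1 + \mu_z^2 / \sigma_z^2}.

The equality (rather than mere inequality) holds because a two-point distribution placing mass \sigma_z^2 / (\sigma_z^2 + \mu_z^2) at 0 and the rest at \mu_z + \sigma_z^2/\mu_z matches both moments and achieves the bound.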
Cite
Text
Honorio and Jaakkola. "Tight Bounds for the Expected Risk of Linear Classifiers and PAC-Bayes Finite-Sample Guarantees." International Conference on Artificial Intelligence and Statistics, 2014.
Markdown
[Honorio and Jaakkola. "Tight Bounds for the Expected Risk of Linear Classifiers and PAC-Bayes Finite-Sample Guarantees." International Conference on Artificial Intelligence and Statistics, 2014.](https://mlanthology.org/aistats/2014/honorio2014aistats-tight/)
BibTeX
@inproceedings{honorio2014aistats-tight,
title = {{Tight Bounds for the Expected Risk of Linear Classifiers and PAC-Bayes Finite-Sample Guarantees}},
author = {Honorio, Jean and Jaakkola, Tommi S.},
booktitle = {International Conference on Artificial Intelligence and Statistics},
year = {2014},
pages = {384--392},
url = {https://mlanthology.org/aistats/2014/honorio2014aistats-tight/}
}