Generalization Bounds and Consistency for Latent Structural Probit and Ramp Loss

Abstract

We consider latent structural versions of probit loss and ramp loss. We show that these surrogate loss functions are consistent in the strong sense that, for any feature map (finite- or infinite-dimensional), they yield predictors approaching the infimum task loss achievable by any linear predictor over the given features. We also give finite-sample generalization bounds (convergence rates) for these loss functions. These bounds suggest that probit loss converges more rapidly. However, ramp loss is more easily optimized and may ultimately be more practical.
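For readers unfamiliar with these surrogates, the following is a sketch of the standard (non-latent) structural definitions that the paper's latent versions generalize. The notation is an assumption, not quoted from the paper: a linear score w · φ(x, ŷ) over a feature map φ, a task loss L(y, ŷ), and an isotropic Gaussian perturbation ε; the latent formulation additionally maximizes the score over a hidden variable.

% Requires amsmath. The names \ell_ramp and \ell_probit are
% illustrative, not necessarily the paper's own notation.
\[
  \ell_{\mathrm{ramp}}(w; x, y)
    \;=\; \max_{\hat{y}} \bigl[\, w \cdot \phi(x, \hat{y}) + L(y, \hat{y}) \,\bigr]
    \;-\; \max_{\hat{y}}\, w \cdot \phi(x, \hat{y})
\]
\[
  \ell_{\mathrm{probit}}(w; x, y)
    \;=\; \mathbb{E}_{\epsilon \sim \mathcal{N}(0, I)}
      \Bigl[ L\bigl(y,\; \hat{y}_{w + \epsilon}(x)\bigr) \Bigr],
  \qquad
  \hat{y}_{w'}(x) \;=\; \operatorname*{argmax}_{\hat{y}}\; w' \cdot \phi(x, \hat{y})
\]

Ramp loss is the gap between the loss-augmented score and the model's own best score, while probit loss is the expected task loss of prediction under Gaussian perturbation of the weights; both upper-bound or smooth the task loss in ways that make the consistency analysis in the abstract possible.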

Cite

Text

Keshet and McAllester. "Generalization Bounds and Consistency for Latent Structural Probit and Ramp Loss." Neural Information Processing Systems, 2011.

Markdown

[Keshet and McAllester. "Generalization Bounds and Consistency for Latent Structural Probit and Ramp Loss." Neural Information Processing Systems, 2011.](https://mlanthology.org/neurips/2011/keshet2011neurips-generalization/)

BibTeX

@inproceedings{keshet2011neurips-generalization,
  title     = {{Generalization Bounds and Consistency for Latent Structural Probit and Ramp Loss}},
  author    = {Keshet, Joseph and McAllester, David A.},
  booktitle = {Neural Information Processing Systems},
  year      = {2011},
  pages     = {2205--2212},
  url       = {https://mlanthology.org/neurips/2011/keshet2011neurips-generalization/}
}