Approximate Reduction from AUC Maximization to 1-Norm Soft Margin Optimization

Abstract

Finding linear classifiers that maximize AUC scores is important in ranking research. This is naturally formulated as a 1-norm hard/soft margin optimization problem over the pn pairs formed from p positive and n negative instances. However, directly solving these optimization problems is impractical because the problem size (pn) is quadratic in the given sample size (p + n). In this paper, we give (approximate) reductions from these problems to hard/soft margin optimization problems of linear size. First, for the hard margin case, we show that the problem reduces to a hard margin optimization problem over p + n instances in which the constant bias term is also optimized. Then, for the soft margin case, we show that the problem approximately reduces to a soft margin optimization problem over p + n instances, for which the resulting linear classifier is guaranteed to achieve a certain margin over pairs.
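To make the pairwise view concrete, here is a minimal sketch (the data and scorer are illustrative, not from the paper) that computes the AUC of a fixed linear classifier as the fraction of correctly ordered positive-negative pairs, highlighting that the naive formulation involves pn constraints versus p + n instances in a linear-size reduction:

```python
import numpy as np

def pairwise_auc(w, X_pos, X_neg):
    """AUC of the linear scorer w over all p*n positive-negative pairs."""
    s_pos = X_pos @ w  # scores of the p positive instances
    s_neg = X_neg @ w  # scores of the n negative instances
    # diff[i, j] > 0 iff the i-th positive is ranked above the j-th negative;
    # ties count as 1/2, matching the usual AUC definition.
    diff = s_pos[:, None] - s_neg[None, :]
    return float(((diff > 0) + 0.5 * (diff == 0)).mean())

rng = np.random.default_rng(0)
p, n, d = 40, 60, 5
X_pos = rng.normal(0.5, 1.0, (p, d))   # synthetic positives
X_neg = rng.normal(-0.5, 1.0, (n, d))  # synthetic negatives
w = np.ones(d)                          # an arbitrary fixed linear scorer

auc = pairwise_auc(w, X_pos, X_neg)
# The pairwise formulation here has p*n = 2400 pairs, while the reduced
# problems in the paper are over only p + n = 100 instances.
```

The quadratic gap (pn versus p + n) is exactly what makes the reductions described in the abstract attractive in practice.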

Cite

Text

Suehiro et al. "Approximate Reduction from AUC Maximization to 1-Norm Soft Margin Optimization." International Conference on Algorithmic Learning Theory, 2011. doi:10.1007/978-3-642-24412-4_26

Markdown

[Suehiro et al. "Approximate Reduction from AUC Maximization to 1-Norm Soft Margin Optimization." International Conference on Algorithmic Learning Theory, 2011.](https://mlanthology.org/alt/2011/suehiro2011alt-approximate/) doi:10.1007/978-3-642-24412-4_26

BibTeX

@inproceedings{suehiro2011alt-approximate,
  title     = {{Approximate Reduction from AUC Maximization to 1-Norm Soft Margin Optimization}},
  author    = {Suehiro, Daiki and Hatano, Kohei and Takimoto, Eiji},
  booktitle = {International Conference on Algorithmic Learning Theory},
  year      = {2011},
  pages     = {324--337},
  doi       = {10.1007/978-3-642-24412-4_26},
  url       = {https://mlanthology.org/alt/2011/suehiro2011alt-approximate/}
}