Consistent Multilabel Ranking Through Univariate Losses
Abstract
We consider the problem of rank loss minimization in the setting of multilabel classification, which is usually tackled by means of convex surrogate losses defined on pairs of labels. Very recently, this approach was put into question by a negative result showing that commonly used pairwise surrogate losses, such as exponential and logistic losses, are inconsistent. In this paper, we show a positive result which is arguably surprising in light of the previous one: the simpler univariate variants of exponential and logistic surrogates (i.e., defined on single labels) are consistent for rank loss minimization. Instead of directly proving convergence, we give a much stronger result by deriving regret bounds and convergence rates. The proposed losses suggest efficient and scalable algorithms, which are tested experimentally.
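To make the abstract's terms concrete, here is a minimal NumPy sketch (not the paper's code) of the objects involved: a univariate logistic surrogate that sums an independent per-label logistic loss over single labels, the ranking induced by sorting labels by score, and the rank loss counting misordered relevant/irrelevant label pairs. The function names and the toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def univariate_logistic_loss(scores, labels):
    """Univariate logistic surrogate: sum_i log(1 + exp(-y_i * f_i)),
    with y_i in {-1, +1}. Each label contributes on its own --
    no pairwise terms."""
    margins = labels * scores
    return np.sum(np.log1p(np.exp(-margins)))

def rank_labels(scores):
    """Predicted ranking: label indices sorted by decreasing score."""
    return np.argsort(-scores)

def rank_loss(scores, labels):
    """Number of relevant/irrelevant label pairs ranked incorrectly
    (a tie counts as half an error)."""
    pos = scores[labels == 1]
    neg = scores[labels == -1]
    # compare every relevant score against every irrelevant one
    diff = pos[:, None] - neg[None, :]
    return np.sum(diff < 0) + 0.5 * np.sum(diff == 0)

# toy example: 4 labels, labels 0 and 2 relevant
labels = np.array([1, -1, 1, -1])
scores = np.array([2.0, -1.0, 0.5, 1.0])
print(rank_labels(scores))        # -> [0 3 2 1]
print(rank_loss(scores, labels))  # -> 1.0 (label 2 ranked below label 3)
```

Training a scorer by minimizing the univariate surrogate and then ranking by the resulting scores is the scheme whose consistency for the rank loss the paper establishes.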
Cite
Text
Dembczynski et al. "Consistent Multilabel Ranking Through Univariate Losses." International Conference on Machine Learning, 2012.
Markdown
[Dembczynski et al. "Consistent Multilabel Ranking Through Univariate Losses." International Conference on Machine Learning, 2012.](https://mlanthology.org/icml/2012/dembczynski2012icml-consistent/)
BibTeX
@inproceedings{dembczynski2012icml-consistent,
title = {{Consistent Multilabel Ranking Through Univariate Losses}},
author = {Dembczynski, Krzysztof and Kotlowski, Wojciech and Hüllermeier, Eyke},
booktitle = {International Conference on Machine Learning},
year = {2012},
url = {https://mlanthology.org/icml/2012/dembczynski2012icml-consistent/}
}