Learning to Predict One or More Ranks in Ordinal Regression Tasks

Abstract

We present nondeterministic hypotheses learned from an ordinal regression task. They try to predict the true rank for an entry, but when the classification is uncertain the hypotheses predict a set of consecutive ranks (an interval). The aim is to keep the set of ranks as small as possible, while still containing the true rank. The justification for learning such hypotheses comes from a real-world problem that arose in breeding beef cattle. After defining a family of loss functions inspired by Information Retrieval, we derive an algorithm for minimizing them. The algorithm is based on the posterior probabilities of ranks given an entry. Two implementations are compared: one based on a multiclass SVM, and the other on Gaussian processes designed to minimize the linear loss in ordinal regression tasks.

Cite

Text

Alonso et al. "Learning to Predict One or More Ranks in Ordinal Regression Tasks." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2008. doi:10.1007/978-3-540-87479-9_21

Markdown

[Alonso et al. "Learning to Predict One or More Ranks in Ordinal Regression Tasks." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2008.](https://mlanthology.org/ecmlpkdd/2008/alonso2008ecmlpkdd-learning/) doi:10.1007/978-3-540-87479-9_21

BibTeX

@inproceedings{alonso2008ecmlpkdd-learning,
  title     = {{Learning to Predict One or More Ranks in Ordinal Regression Tasks}},
  author    = {Alonso, Jaime and del Coz, Juan José and Díez, Jorge and Luaces, Oscar and Bahamonde, Antonio},
  booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
  year      = {2008},
  pages     = {39--54},
  doi       = {10.1007/978-3-540-87479-9_21},
  url       = {https://mlanthology.org/ecmlpkdd/2008/alonso2008ecmlpkdd-learning/}
}