Incremental Algorithms for Hierarchical Classification

Abstract

We study the problem of hierarchical classification when labels corresponding to partial and/or multiple paths in the underlying taxonomy are allowed. We introduce a new hierarchical loss function, the H-loss, implementing the simple intuition that additional mistakes in the subtree of a mistaken class should not be charged for. Based on a probabilistic data model introduced in earlier work, we derive the Bayes-optimal classifier for the H-loss. We then empirically compare two incremental approximations of the Bayes-optimal classifier with a flat SVM classifier and with classifiers obtained by using hierarchical versions of the Perceptron and SVM algorithms. The experiments show that our simplest incremental approximation of the Bayes-optimal classifier performs, after just one training epoch, nearly as well as the hierarchical SVM classifier (which performs best). For the same incremental algorithm we also derive an H-loss bound showing, when data are generated by our probabilistic data model, exponentially fast convergence to the H-loss of the hierarchical classifier based on the true model parameters.
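The H-loss intuition stated above — a mistake is charged only when all ancestors of the mistaken class were predicted correctly, so errors inside the subtree of an already-mistaken class add no extra cost — can be sketched as follows. This is an illustrative reconstruction, not the paper's code; the names `h_loss`, `parent`, and `costs` are hypothetical, and uniform unit costs are assumed by default.

```python
def h_loss(pred, true, parent, costs=None):
    """Hedged sketch of an H-loss computation over a taxonomy.

    pred, true: dicts mapping node -> 0/1 label (multilabel over the taxonomy)
    parent:     dict mapping node -> its parent node (None for roots)
    costs:      optional dict node -> cost; unit costs if omitted
    """
    def ancestors_correct(node):
        # Walk up to the root; the node is chargeable only if every
        # ancestor was predicted correctly.
        p = parent[node]
        while p is not None:
            if pred[p] != true[p]:
                return False
            p = parent[p]
        return True

    loss = 0.0
    for node in pred:
        if pred[node] != true[node] and ancestors_correct(node):
            loss += 1.0 if costs is None else costs[node]
    return loss
```

For a chain r → a → b with true labels all 1 and predictions r=1, a=0, b=0, only the mistake at `a` is charged: `b` is also wrong, but it lies in the subtree of the mistaken class `a`, so the H-loss is 1 rather than 2.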

Cite

Text

Cesa-Bianchi et al. "Incremental Algorithms for Hierarchical Classification." Neural Information Processing Systems, 2004.

Markdown

[Cesa-Bianchi et al. "Incremental Algorithms for Hierarchical Classification." Neural Information Processing Systems, 2004.](https://mlanthology.org/neurips/2004/cesabianchi2004neurips-incremental/)

BibTeX

@inproceedings{cesabianchi2004neurips-incremental,
  title     = {{Incremental Algorithms for Hierarchical Classification}},
  author    = {Cesa-Bianchi, Nicolò and Gentile, Claudio and Tironi, Andrea and Zaniboni, Luca},
  booktitle = {Neural Information Processing Systems},
  year      = {2004},
  pages     = {233--240},
  url       = {https://mlanthology.org/neurips/2004/cesabianchi2004neurips-incremental/}
}