Empirical Comparison of Multi-Label Classification Algorithms

Abstract

Multi-label classification arises in many real-world applications. This paper empirically studies the performance of a variety of multi-label classification algorithms. Some are based on problem transformation, while others are based on algorithm adaptation. Our experimental results show that the adaptive Multi-Label K-Nearest Neighbor performs best, followed by Random k-Label Set, then Classifier Chain and Binary Relevance. Adaboost.MH performs worst, followed by Pruned Problem Transformation. Our experimental results also give us confidence in the existence of correlations among labels. These insights shed light on future research directions for multi-label classification.
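To make the two problem-transformation strategies mentioned in the abstract concrete, here is a minimal sketch of Binary Relevance (one independent binary model per label) versus Classifier Chain (each per-label model also sees the earlier labels). The 1-nearest-neighbour base classifier and the toy dataset are illustrative assumptions, not the paper's experimental setup.

```python
def nn_predict(train_X, train_y, x):
    """Predict with a 1-nearest-neighbour rule (squared Euclidean distance)."""
    best = min(range(len(train_X)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(train_X[i], x)))
    return train_y[best]

def binary_relevance(train_X, train_Y, x):
    """Predict each label with an independent binary model (here 1-NN)."""
    n_labels = len(train_Y[0])
    return [nn_predict(train_X, [y[j] for y in train_Y], x)
            for j in range(n_labels)]

def classifier_chain(train_X, train_Y, x):
    """Predict labels in a fixed order, appending each prediction as a feature."""
    n_labels = len(train_Y[0])
    preds = []
    for j in range(n_labels):
        # Training inputs for label j are augmented with the true earlier
        # labels (a common chain training scheme); at test time the model's
        # own earlier predictions are appended instead.
        Xj = [list(xi) + list(yi[:j]) for xi, yi in zip(train_X, train_Y)]
        preds.append(nn_predict(Xj, [y[j] for y in train_Y], list(x) + preds))
    return preds

if __name__ == "__main__":
    X = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
    Y = [[0, 0], [1, 0], [0, 1], [1, 1]]   # two (toy) correlated labels
    print(binary_relevance(X, Y, (0.9, 0.1)))  # -> [1, 0]
    print(classifier_chain(X, Y, (0.9, 0.1)))  # -> [1, 0]
```

The chain variant is what lets Classifier Chain exploit the label correlations that the paper's results point to, whereas Binary Relevance treats every label in isolation.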

Cite

Text

Tawiah and Sheng. "Empirical Comparison of Multi-Label Classification Algorithms." AAAI Conference on Artificial Intelligence, 2013. doi:10.1609/AAAI.V27I1.8521

Markdown

[Tawiah and Sheng. "Empirical Comparison of Multi-Label Classification Algorithms." AAAI Conference on Artificial Intelligence, 2013.](https://mlanthology.org/aaai/2013/tawiah2013aaai-empirical/) doi:10.1609/AAAI.V27I1.8521

BibTeX

@inproceedings{tawiah2013aaai-empirical,
  title     = {{Empirical Comparison of Multi-Label Classification Algorithms}},
  author    = {Tawiah, Clifford A. and Sheng, Victor S.},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2013},
  pages     = {1645--1646},
  doi       = {10.1609/AAAI.V27I1.8521},
  url       = {https://mlanthology.org/aaai/2013/tawiah2013aaai-empirical/}
}