Does One-Against-All or One-Against-One Improve the Performance of Multiclass Classifications?
Abstract
One-against-all and one-against-one are two popular methodologies for reducing a multiclass classification problem into a set of binary classifications. In this paper, we study how one-against-all and one-against-one affect the performance of classification algorithms such as decision trees, naïve Bayes, support vector machines, and logistic regression. Since both one-against-all and one-against-one effectively create a committee of classifiers, they are expected to improve the performance of these algorithms. However, our experimental results surprisingly show that one-against-all worsens the performance of the algorithms on most datasets. One-against-one helps, but performs worse than bagging the same algorithms for the same number of iterations. We therefore conclude that neither one-against-all nor one-against-one should be used with algorithms that can perform multiclass classification directly; bagging is a better approach for improving their performance.
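The comparison the abstract describes can be sketched with off-the-shelf tools. Below is a minimal, hedged illustration using scikit-learn's `OneVsRestClassifier` (one-against-all), `OneVsOneClassifier`, and `BaggingClassifier` around a decision tree; the iris dataset, 5-fold cross-validation, and 10 bagging iterations are my assumptions for illustration, not the paper's experimental setup.

```python
# Sketch: comparing direct multiclass, one-vs-all, one-vs-one, and bagging
# around the same base learner. The dataset and hyperparameters here are
# illustrative assumptions, not the paper's benchmark configuration.
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

candidates = {
    # Decision trees handle multiclass problems directly.
    "direct multiclass": DecisionTreeClassifier(random_state=0),
    # One-against-all: one binary classifier per class (class vs. rest).
    "one-against-all": OneVsRestClassifier(DecisionTreeClassifier(random_state=0)),
    # One-against-one: one binary classifier per pair of classes.
    "one-against-one": OneVsOneClassifier(DecisionTreeClassifier(random_state=0)),
    # Bagging: same base learner trained on bootstrap resamples.
    "bagging": BaggingClassifier(DecisionTreeClassifier(random_state=0),
                                 n_estimators=10, random_state=0),
}

for name, clf in candidates.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```

Running a comparison like this over many datasets (rather than one) is essentially the paper's experimental design.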
Cite
Text
Eichelberger and Sheng. "Does One-Against-All or One-Against-One Improve the Performance of Multiclass Classifications?" AAAI Conference on Artificial Intelligence, 2013. doi:10.1609/AAAI.V27I1.8522
Markdown
[Eichelberger and Sheng. "Does One-Against-All or One-Against-One Improve the Performance of Multiclass Classifications?" AAAI Conference on Artificial Intelligence, 2013.](https://mlanthology.org/aaai/2013/eichelberger2013aaai-one/) doi:10.1609/AAAI.V27I1.8522
BibTeX
@inproceedings{eichelberger2013aaai-one,
title = {{Does One-Against-All or One-Against-One Improve the Performance of Multiclass Classifications?}},
author = {Eichelberger, Robert Kyle and Sheng, Victor S.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2013},
pages = {1609-1610},
doi = {10.1609/AAAI.V27I1.8522},
url = {https://mlanthology.org/aaai/2013/eichelberger2013aaai-one/}
}