Using Linear-Threshold Algorithms to Combine Multi-Class Sub-Experts
Abstract
We present a new type of multi-class learning algorithm called a linear-max algorithm. Linear-max algorithms learn with a special type of attribute called a sub-expert: a vector attribute that has a value for each output class. The goal of the multi-class algorithm is to learn a linear function combining the sub-experts and to use this linear function to make correct class predictions. The main contribution of this work is to prove that, in the on-line mistake-bounded model of learning, a multi-class sub-expert learning algorithm has the same mistake bounds as a related two-class linear-threshold algorithm. We apply these techniques to three linear-threshold algorithms: Perceptron, Winnow, and ROMMA. We show these algorithms give good performance on artificial and real datasets.

ICML 2003, Proceedings of the Twentieth International Conference on Machine Learning
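The prediction scheme the abstract describes can be sketched concretely. Below is a minimal, hedged illustration (not the paper's actual pseudocode) of the setup it names: each sub-expert supplies a score vector with one entry per class, the learner keeps a weight per sub-expert, and the predicted class is the one maximizing the linear combination. The Perceptron-style mistake update shown is one standard instantiation of the kind of linear-threshold update the paper considers; all names and the learning rate are illustrative.

```python
import numpy as np

def linear_max_predict(weights, sub_expert_scores):
    """Predict a class by a linear-max rule (illustrative sketch).

    weights: shape (n_experts,), one weight per sub-expert.
    sub_expert_scores: shape (n_experts, n_classes), one score
        per (sub-expert, class) pair.
    Returns the index of the class with the largest combined score.
    """
    combined = weights @ sub_expert_scores  # one combined score per class
    return int(np.argmax(combined))

def perceptron_style_update(weights, sub_expert_scores, y_true, y_pred, lr=1.0):
    """On a mistake, move weight toward sub-experts that ranked the
    true class above the predicted class (a standard Perceptron-style
    multi-class update, shown here only as an illustration)."""
    if y_pred != y_true:
        weights = weights + lr * (sub_expert_scores[:, y_true]
                                  - sub_expert_scores[:, y_pred])
    return weights
```

For example, with weights `[1.0, 0.5]` and two sub-experts scoring two classes as `[[0.2, 0.8], [0.9, 0.1]]`, the combined scores are `[0.65, 0.85]`, so class 1 is predicted.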
Cite
Text

Mesterharm. "Using Linear-Threshold Algorithms to Combine Multi-Class Sub-Experts." International Conference on Machine Learning, 2003.

BibTeX
@inproceedings{mesterharm2003icml-using,
title = {{Using Linear-Threshold Algorithms to Combine Multi-Class Sub-Experts}},
author = {Mesterharm, Chris},
booktitle = {International Conference on Machine Learning},
year = {2003},
pages = {544--551},
url = {https://mlanthology.org/icml/2003/mesterharm2003icml-using/}
}