Revisiting Output Coding for Sequential Supervised Learning
Abstract
Markov models are commonly used for joint inference of label sequences. Unfortunately, inference scales quadratically in the number of labels, which is problematic for training methods where inference is performed repeatedly and is the primary computational bottleneck for large label sets. Recent work has used output coding to address this issue by converting a problem with many labels to a set of problems with binary labels. Models were independently trained for each binary problem, at a much reduced computational cost, and then combined for joint inference over the original labels. Here we revisit this idea and show through experiments on synthetic and benchmark data sets that the approach can perform poorly when it is critical to explicitly capture the Markovian transition structure of the large-label problem. We then describe a simple cascade-training approach and show that it can improve performance on such problems with negligible computational overhead.
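The core reduction described in the abstract, converting a many-label problem into a set of binary problems via output coding, can be illustrated with a small sketch. This is not the paper's implementation; the code matrix construction and function names here are illustrative assumptions. Each of the K labels is assigned a distinct row of a binary code matrix, one binary classifier is trained per column, and a prediction over the original labels is recovered by nearest-Hamming-distance decoding of the per-column predictions.

```python
import numpy as np

# Illustrative sketch (not the authors' code): output coding for a
# K-label problem using n_bits binary subproblems.

K, n_bits = 8, 6  # 8 original labels, 6 binary subproblems

# Deterministic code matrix: row i is the n_bits-bit binary expansion
# of i, so all rows (codewords) are distinct.
code = np.array([[(i >> b) & 1 for b in range(n_bits)] for i in range(K)])

def encode(label: int) -> np.ndarray:
    """Binary target for each of the n_bits classifiers."""
    return code[label]

def decode(bit_predictions: np.ndarray) -> int:
    """Map (possibly noisy) bit predictions to the nearest codeword's label."""
    hamming = (code != bit_predictions).sum(axis=1)
    return int(hamming.argmin())

# Error-free bit predictions decode back to the original label.
assert all(decode(encode(y)) == y for y in range(K))
```

In the sequential setting the paper studies, each binary subproblem is itself a sequence-labeling problem (e.g. trained as a binary Markov model), which is why independently trained binary models can fail to capture the transition structure of the original large-label chain.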
Cite

Text

Hao and Fern. "Revisiting Output Coding for Sequential Supervised Learning." International Joint Conference on Artificial Intelligence, 2007.

Markdown

[Hao and Fern. "Revisiting Output Coding for Sequential Supervised Learning." International Joint Conference on Artificial Intelligence, 2007.](https://mlanthology.org/ijcai/2007/hao2007ijcai-revisiting/)

BibTeX
@inproceedings{hao2007ijcai-revisiting,
title = {{Revisiting Output Coding for Sequential Supervised Learning}},
author = {Hao, Guohua and Fern, Alan},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2007},
pages = {2486-2491},
url = {https://mlanthology.org/ijcai/2007/hao2007ijcai-revisiting/}
}