Learning Diverse Models: The Coulomb Structured Support Vector Machine
Abstract
In structured prediction, it is standard procedure to discriminatively train a single model that is then used to make a single prediction for each input. This practice is simple but risky in many ways. For instance, models are often designed with tractability rather than faithfulness in mind. To hedge against such model misspecification, it may be useful to train multiple models that are all a reasonable fit to the training data, at least one of which may make more valid predictions than the single model of the standard procedure. We propose the Coulomb Structured SVM (CSSVM) as a means to obtain, at training time, a full ensemble of different models. At test time, these models can run in parallel and independently to make diverse predictions. We demonstrate on challenging tasks from computer vision that some of these diverse predictions have significantly lower task loss than that of a single model, and improve over state-of-the-art diversity-encouraging approaches.
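The core idea — training several models jointly while a Coulomb-like potential pushes their weight vectors apart, so that the ensemble covers different plausible fits — can be illustrated with a toy sketch. The code below is an assumption-laden simplification, not the authors' formulation: it uses plain binary hinge-loss SVMs instead of structured SVMs, and the function name `train_diverse_ensemble`, the repulsion weight `gamma`, and all hyperparameters are illustrative choices.

```python
import numpy as np

def train_diverse_ensemble(X, y, n_models=4, gamma=0.5, lr=0.01, epochs=50, seed=0):
    """Toy diversity-encouraging training (hypothetical simplification of CSSVM).

    Each of the `n_models` linear classifiers descends a regularized
    hinge loss, plus a Coulomb-like pairwise potential sum_{k != m} 1/||w_m - w_k||
    whose gradient repels the weight vectors from one another.
    X: (n, d) inputs, y: (n,) labels in {-1, +1}.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=0.1, size=(n_models, d))
    for _ in range(epochs):
        for m in range(n_models):
            # Subgradient of 0.5*||w||^2 + mean hinge loss for model m.
            margins = y * (X @ W[m])
            viol = margins < 1.0
            grad = W[m] - (y[viol, None] * X[viol]).sum(axis=0) / len(X)
            # Coulomb-like repulsion: gradient of 1/dist w.r.t. w_m is
            # -diff/dist^3, so descending it pushes w_m away from w_k.
            for k in range(n_models):
                if k == m:
                    continue
                diff = W[m] - W[k]
                dist = np.linalg.norm(diff) + 1e-8
                grad -= gamma * diff / dist**3
            W[m] -= lr * grad
    return W
```

With `gamma=0` all models collapse toward the same regularized-hinge minimizer; with `gamma>0` the pairwise forces keep the weight vectors spread out, mimicking how the CSSVM obtains an ensemble of genuinely different models that can then predict independently at test time.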
Cite
Text
Schiegg et al. "Learning Diverse Models: The Coulomb Structured Support Vector Machine." European Conference on Computer Vision, 2016. doi:10.1007/978-3-319-46487-9_36
Markdown
[Schiegg et al. "Learning Diverse Models: The Coulomb Structured Support Vector Machine." European Conference on Computer Vision, 2016.](https://mlanthology.org/eccv/2016/schiegg2016eccv-learning/) doi:10.1007/978-3-319-46487-9_36
BibTeX
@inproceedings{schiegg2016eccv-learning,
title = {{Learning Diverse Models: The Coulomb Structured Support Vector Machine}},
author = {Schiegg, Martin and Diego, Ferran and Hamprecht, Fred A.},
booktitle = {European Conference on Computer Vision},
year = {2016},
pages = {585--599},
doi = {10.1007/978-3-319-46487-9_36},
url = {https://mlanthology.org/eccv/2016/schiegg2016eccv-learning/}
}