Multi-Task Feature and Kernel Selection for SVMs
Abstract
We compute a common feature selection or kernel selection configuration for multiple support vector machines (SVMs) trained on different yet inter-related datasets. The method is advantageous when multiple classification tasks and differently labeled datasets exist over a common input space. Different datasets can mutually reinforce a common choice of representation or relevant features for their various classifiers. We derive a multi-task representation learning approach using the maximum entropy discrimination formalism. The resulting convex algorithms maintain the global solution properties of support vector machines. However, in addition to multiple SVM classification/regression parameters, they also jointly estimate an optimal subset of features or optimal combination of kernels. Experiments are shown on standardized datasets.
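The abstract's central idea, that several related tasks over a common input space can reinforce a shared choice of kernel, can be illustrated with a much simpler stand-in for the paper's maximum entropy discrimination algorithm. The sketch below scores each candidate kernel by kernel-target alignment on every task, pools the scores across tasks, and normalizes them into a single shared weight vector. This is an illustrative heuristic, not the paper's method; the toy data, kernel choices, and pooling rule are all assumptions.

```python
import numpy as np

def linear_kernel(X):
    # Gram matrix of inner products
    return X @ X.T

def rbf_kernel(X, gamma=1.0):
    # Gaussian kernel from squared pairwise distances
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-gamma * d2)

def alignment(K, y):
    # kernel-target alignment <K, y y^T> / (||K|| ||y y^T||)
    Y = np.outer(y, y)
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

# Three toy tasks sharing one 5-dimensional input space (synthetic data)
rng = np.random.default_rng(0)
tasks = []
for _ in range(3):
    X = rng.normal(size=(40, 5))
    y = np.sign(X[:, 0] + 0.1 * rng.normal(size=40))
    tasks.append((X, y))

# Candidate kernels; each task votes for every candidate
kernels = [linear_kernel, lambda X: rbf_kernel(X, gamma=0.5)]
scores = np.zeros(len(kernels))
for X, y in tasks:
    for j, k in enumerate(kernels):
        scores[j] += max(alignment(k(X), y), 0.0)

# Shared, convex kernel combination weights (sum to 1)
beta = scores / scores.sum()
print(beta)
```

Each task would then be trained with the single pooled kernel `sum(beta[j] * K_j)`, so all tasks commit to one common representation, which is the structural point the paper makes with its convex joint estimation.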
Cite
Jebara. "Multi-Task Feature and Kernel Selection for SVMs." International Conference on Machine Learning, 2004. doi:10.1145/1015330.1015426

BibTeX:
@inproceedings{jebara2004icml-multi,
title = {{Multi-Task Feature and Kernel Selection for SVMs}},
author = {Jebara, Tony},
booktitle = {International Conference on Machine Learning},
year = {2004},
doi = {10.1145/1015330.1015426},
url = {https://mlanthology.org/icml/2004/jebara2004icml-multi/}
}