Rounding Methods for Discrete Linear Classification
Abstract
Learning discrete linear functions is a notoriously difficult challenge. In this paper, the learning task is cast as a combinatorial optimization problem: given a set of positive and negative feature vectors in Euclidean space, the goal is to find a discrete linear function that minimizes the cumulative hinge loss over this training set. Since this problem is NP-hard, we propose two simple rounding algorithms that discretize the fractional solution of the problem. Generalization bounds are derived for two important classes of binary-weighted linear functions, by establishing the Rademacher complexity of these classes and proving approximation bounds for the rounding methods. These methods are compared on both synthetic and real-world data.
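To make the rounding idea concrete, here is a minimal sketch (not the paper's actual algorithms) of the general recipe the abstract describes: take a fractional weight vector, round it coordinate-wise to binary weights, and score candidates by cumulative hinge loss. All function and variable names below are illustrative assumptions.

```python
import numpy as np

def hinge_loss(w, X, y):
    """Cumulative hinge loss of the linear classifier x -> X @ w on labels y in {-1, +1}."""
    return np.maximum(0.0, 1.0 - y * (X @ w)).sum()

def randomized_rounding(w_frac, X, y, n_draws=20, seed=None):
    """Round a fractional weight vector in [0, 1]^d to binary weights.

    Each coordinate is set to 1 with probability equal to its fractional value;
    the draw with the smallest cumulative hinge loss on (X, y) is returned.
    This is a generic randomized-rounding sketch, not the paper's method.
    """
    rng = np.random.default_rng(seed)
    p = np.clip(w_frac, 0.0, 1.0)
    best_w, best_loss = None, np.inf
    for _ in range(n_draws):
        w_bin = (rng.random(p.shape) < p).astype(float)
        loss = hinge_loss(w_bin, X, y)
        if loss < best_loss:
            best_w, best_loss = w_bin, loss
    return best_w, best_loss

# Tiny usage example on synthetic data (hypothetical setup).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
w_true = (rng.random(10) < 0.5).astype(float)
y = np.sign(X @ w_true - 0.5)
w_frac = rng.random(10)  # stand-in for a relaxed (fractional) solution
w_bin, loss = randomized_rounding(w_frac, X, y, seed=1)
print(w_bin, loss)
```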
Cite
Text
Chevaleyre et al. "Rounding Methods for Discrete Linear Classification." International Conference on Machine Learning, 2013.

Markdown
[Chevaleyre et al. "Rounding Methods for Discrete Linear Classification." International Conference on Machine Learning, 2013.](https://mlanthology.org/icml/2013/chevaleyre2013icml-rounding/)

BibTeX
@inproceedings{chevaleyre2013icml-rounding,
title = {{Rounding Methods for Discrete Linear Classification}},
author = {Chevaleyre, Yann and Koriche, Frédérick and Zucker, Jean-Daniel},
booktitle = {International Conference on Machine Learning},
year = {2013},
pages = {651--659},
volume = {28},
url = {https://mlanthology.org/icml/2013/chevaleyre2013icml-rounding/}
}