Boosting Classifiers with Tightened L0-Relaxation Penalties
Abstract
We propose a novel boosting algorithm that improves on current algorithms for weighted voting classification by striking a better balance between classification accuracy and sparsity of the weight vector. To justify our optimization formulations, we first consider a novel integer linear program as a model for sparse classifier selection, generalizing the minimum disagreement halfspace problem, whose complexity has been investigated in computational learning theory. Specifically, our mixed integer problem is that of finding a separating hyperplane with minimum empirical error subject to an L0-norm penalty. We find that common soft margin linear programming formulations for robust classification are equivalent to a continuous relaxation of our model. Since the initial continuous relaxation is weak, we suggest a tighter relaxation, using novel cutting planes, to better approximate the integer solution. To solve this relaxation, we propose a new boosting algorithm based on linear programming with dynamic generation of variables and constraints. We demonstrate the classification performance of our proposed algorithm with experimental results, and justify our selection of parameters using a minimum description length, compression interpretation of learning.
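The soft margin linear program mentioned above can be illustrated with a minimal sketch. This is not the paper's tightened relaxation or cutting-plane procedure, only the standard LPBoost-style formulation it starts from: minimize an L1 weight penalty (the weak continuous surrogate for the L0 penalty) plus per-example slacks, subject to margin constraints. The data matrix `H`, labels `y`, and penalty weight `D` below are hypothetical toy values chosen for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Toy matrix of base-classifier outputs: H[i, j] = h_j(x_i) in {-1, +1}.
# Column 0 agrees with the labels on every example, so a sparse solution exists.
H = np.array([[+1, +1, -1],
              [+1, -1, +1],
              [-1, +1, +1],
              [-1, -1, -1]], dtype=float)
y = np.array([+1, +1, -1, -1], dtype=float)  # example labels
D = 0.5  # hypothetical trade-off between weight penalty and slack

n, m = H.shape
# Decision variables: [w_1..w_m, xi_1..xi_n], all nonnegative.
# Objective: minimize D * sum(w) + sum(xi)  (L1 surrogate for the L0 penalty).
c = np.concatenate([D * np.ones(m), np.ones(n)])
# Margin constraints y_i * (H w)_i >= 1 - xi_i, rewritten in
# linprog's A_ub @ x <= b_ub form as -y_i * (H w)_i - xi_i <= -1.
A_ub = np.hstack([-(y[:, None] * H), -np.eye(n)])
b_ub = -np.ones(n)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (m + n))
w, xi = res.x[:m], res.x[m:]
print("weights:", np.round(w, 4))  # only the first base classifier gets weight
```

In a column-generation boosting setting, the weight columns of this LP would be generated dynamically rather than enumerated up front, which is the role of the base learner.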
Cite

Goldberg and Eckstein. "Boosting Classifiers with Tightened L0-Relaxation Penalties." International Conference on Machine Learning, 2010.

BibTeX
@inproceedings{goldberg2010icml-boosting,
title = {{Boosting Classifiers with Tightened L0-Relaxation Penalties}},
author = {Goldberg, Noam and Eckstein, Jonathan},
booktitle = {International Conference on Machine Learning},
year = {2010},
pages = {383-390},
url = {https://mlanthology.org/icml/2010/goldberg2010icml-boosting/}
}