Semi-Definite Programming by Perceptron Learning
Abstract
We present a modified version of the perceptron learning algorithm (PLA) which solves semidefinite programs (SDPs) in polynomial time. The algorithm is based on the following three observations: (i) Semidefinite programs are linear programs with infinitely many (linear) constraints; (ii) every linear program can be solved by a sequence of constraint satisfaction problems with linear constraints; (iii) in general, the perceptron learning algorithm solves a constraint satisfaction problem with linear constraints in finitely many updates. Combining the PLA with a probabilistic rescaling algorithm (which, on average, increases the size of the feasible region) results in a probabilistic algorithm for solving SDPs that runs in polynomial time. We present preliminary results which demonstrate that the algorithm works, but is not competitive with state-of-the-art interior point methods.
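To make observations (i) and (iii) concrete, the following is a minimal Python sketch (assuming NumPy); it is an illustration only, not the algorithm from the paper, which additionally handles the SDP's general linear constraints and incorporates the probabilistic rescaling step. The constraint "X is positive semidefinite" is treated as infinitely many linear constraints ⟨vvᵀ, X⟩ ≥ 0 over unit vectors v, and a perceptron-style update adds the most violated constraint matrix to the current iterate. All function and variable names are illustrative.

```python
import numpy as np

def most_violated_constraint(X):
    """Observation (i): 'X is positive semidefinite' is the infinite family of
    linear constraints <v v^T, X> >= 0 over all unit vectors v. The most
    violated one corresponds to the eigenvector with the smallest eigenvalue.
    Returns None when X already satisfies every constraint (i.e. X is PSD)."""
    eigvals, eigvecs = np.linalg.eigh(X)      # eigenvalues in ascending order
    if eigvals[0] >= 0:
        return None
    v = eigvecs[:, 0]
    return np.outer(v, v)                     # constraint matrix v v^T

def perceptron_towards_psd(X0, max_updates=1000):
    """Observation (iii): perceptron-style updates for the feasibility problem
    'find X in the PSD cone', starting from a symmetric matrix X0. Each update
    adds the most violated constraint matrix to the iterate, the direct
    analogue of the classical perceptron update, here in the space of
    symmetric matrices."""
    X = np.array(X0, dtype=float)
    for _ in range(max_updates):
        C = most_violated_constraint(X)
        if C is None:
            return X                          # feasible: all constraints hold
        X = X + C                             # perceptron update
    return None                               # update budget exhausted
```

For example, starting from an indefinite matrix such as `np.diag([1.0, -2.5])`, this sketch reaches a PSD iterate after a few updates; the paper's contribution lies in combining such perceptron updates with rescaling so that the total number of updates is polynomial.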
Cite
Text
Graepel et al. "Semi-Definite Programming by Perceptron Learning." Neural Information Processing Systems, 2003.
Markdown
[Graepel et al. "Semi-Definite Programming by Perceptron Learning." Neural Information Processing Systems, 2003.](https://mlanthology.org/neurips/2003/graepel2003neurips-semidefinite/)
BibTeX
@inproceedings{graepel2003neurips-semidefinite,
title = {{Semi-Definite Programming by Perceptron Learning}},
author = {Graepel, Thore and Herbrich, Ralf and Kharechko, Andriy and Shawe-Taylor, John S.},
booktitle = {Neural Information Processing Systems},
year = {2003},
pages = {457--464},
url = {https://mlanthology.org/neurips/2003/graepel2003neurips-semidefinite/}
}