QBoost: Large Scale Classifier Training with Adiabatic Quantum Optimization
Abstract
We introduce a novel discrete optimization method for training binary classifiers at large scale within a boosting framework. The motivation is to cast the training problem into the format required by existing adiabatic quantum hardware. First, we provide theoretical arguments for transforming the originally continuous optimization problem into one over discrete variables of low bit depth. Next, we propose QBoost, an iterative training algorithm in which a subset of weak classifiers is selected by solving a hard optimization problem in each iteration. A strong classifier is constructed incrementally by concatenating these subsets of weak classifiers. We supplement the findings with experiments on one synthetic and two natural data sets and compare against the performance of existing boosting algorithms. Finally, by conducting a quantum Monte Carlo simulation, we gather evidence that adiabatic quantum optimization can handle the discrete optimization problems generated by QBoost.
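The per-iteration selection problem described in the abstract takes the form of a quadratic unconstrained binary optimization (QUBO), the input format accepted by adiabatic quantum hardware: binary weights decide which weak classifiers enter the strong classifier, with a squared loss plus a sparsity penalty. The Python sketch below is not from the paper; the function names and the brute-force solver (a classical stand-in for the quantum optimizer, only feasible for small numbers of weak classifiers) are illustrative assumptions. It shows how the binary selection objective expands into a QUBO matrix, using the identity w_i^2 = w_i for binary variables.

```python
import itertools
import numpy as np

def qboost_qubo(H, y, lam):
    """Build the QUBO matrix for binary weak-classifier selection.

    H   : (S, N) array, H[s, i] = h_i(x_s) in {-1, +1} (weak classifier outputs)
    y   : (S,) array of labels in {-1, +1}
    lam : regularization strength penalizing each selected classifier

    Minimizing  sum_s (sum_i w_i * H[s, i] / N - y_s)^2 + lam * sum_i w_i
    over w in {0, 1}^N equals  w^T Q w + const,  since w_i^2 = w_i lets the
    linear terms be folded onto the diagonal of Q.
    """
    S, N = H.shape
    Q = (H.T @ H) / N**2                 # pairwise couplings between classifiers
    linear = -2.0 * (H.T @ y) / N + lam  # data-fit linear term plus sparsity penalty
    Q[np.diag_indices(N)] += linear      # fold linear terms onto the diagonal
    return Q

def brute_force_minimize(Q):
    """Exhaustively minimize w^T Q w over w in {0, 1}^N.

    A classical stand-in for the adiabatic quantum optimizer; on hardware,
    Q would instead be mapped onto qubit biases and couplings.
    """
    N = Q.shape[0]
    best_w, best_e = None, np.inf
    for bits in itertools.product([0, 1], repeat=N):
        w = np.array(bits)
        e = w @ Q @ w
        if e < best_e:
            best_w, best_e = w, e
    return best_w, best_e
```

In an iterative loop as described above, the selected subset (the nonzero entries of the returned w) would be concatenated onto the strong classifier before the next round of selection.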
Cite
Text
Neven et al. "QBoost: Large Scale Classifier Training with Adiabatic Quantum Optimization." Proceedings of the Fourth Asian Conference on Machine Learning, 2012.

Markdown

[Neven et al. "QBoost: Large Scale Classifier Training with Adiabatic Quantum Optimization." Proceedings of the Fourth Asian Conference on Machine Learning, 2012.](https://mlanthology.org/acml/2012/neven2012acml-qboost/)

BibTeX
@inproceedings{neven2012acml-qboost,
title = {{QBoost: Large Scale Classifier Training with Adiabatic Quantum Optimization}},
author = {Neven, Hartmut and Denchev, Vasil S. and Rose, Geordie and Macready, William G.},
booktitle = {Proceedings of the Fourth Asian Conference on Machine Learning},
year = {2012},
pages = {333--348},
volume = {25},
url = {https://mlanthology.org/acml/2012/neven2012acml-qboost/}
}