An Efficient Algorithm for Learning with Semi-Bandit Feedback
Abstract
We consider the problem of online combinatorial optimization under semi-bandit feedback. The goal of the learner is to sequentially select its actions from a combinatorial decision set so as to minimize its cumulative loss. We propose a learning algorithm for this problem based on combining the Follow-the-Perturbed-Leader (FPL) prediction method with a novel loss estimation procedure called Geometric Resampling (GR). Contrary to previous solutions, the resulting algorithm can be efficiently implemented for any decision set where efficient offline combinatorial optimization is possible at all. Assuming that the elements of the decision set can be described with d-dimensional binary vectors with at most m non-zero entries, we show that the expected regret of our algorithm after T rounds is \(O(m\sqrt{dT\log d})\). As a side result, we also improve the best known regret bounds for FPL in the full information setting to \(O(m^{3/2}\sqrt{T\log d})\), gaining a factor of \(\sqrt{d/m}\) over previous bounds for this algorithm.
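The abstract describes the two building blocks of the algorithm: an FPL step that feeds exponentially perturbed cumulative loss estimates to an offline combinatorial oracle, and a Geometric Resampling step that estimates the reciprocal of each played coordinate's selection probability by re-running the perturbed oracle until that coordinate reappears. The following minimal sketch illustrates these two steps under simplifying assumptions not taken from the paper: the decision set is taken to be all m-element subsets of {1, …, d} (so the offline oracle is just a top-m selection), losses lie in [0, 1], and the function names, the learning rate `eta`, and the resampling cap `M` are illustrative choices.

```python
import random

def oracle(scores, m):
    """Offline combinatorial optimizer for the assumed m-subset decision set:
    return the indices of the m smallest perturbed loss estimates."""
    return sorted(range(len(scores)), key=lambda i: scores[i])[:m]

def fpl_gr(losses, m, eta, M, rng=random):
    """Sketch of FPL with Geometric Resampling under semi-bandit feedback.
    losses: T x d table of per-coordinate losses in [0, 1] (only the played
    coordinates' losses are actually used, mimicking semi-bandit feedback)."""
    d = len(losses[0])
    l_hat = [0.0] * d            # cumulative loss estimates
    total = 0.0
    for loss in losses:
        # FPL step: perturb estimates with i.i.d. exponential noise, call oracle.
        z = [rng.expovariate(1.0) for _ in range(d)]
        action = oracle([eta * l_hat[i] - z[i] for i in range(d)], m)
        total += sum(loss[i] for i in action)
        # Geometric Resampling step: re-run the perturbed oracle with fresh
        # noise until each played coordinate reappears; the waiting time is a
        # geometric variable whose mean estimates 1 / p_{t,i}.
        k = {i: M for i in action}  # M caps the number of resamples (truncation)
        waiting = set(action)
        for trial in range(1, M + 1):
            if not waiting:
                break
            z2 = [rng.expovariate(1.0) for _ in range(d)]
            resampled = set(oracle([eta * l_hat[i] - z2[i] for i in range(d)], m))
            for i in waiting & resampled:
                k[i] = trial
            waiting -= resampled
        # Semi-bandit loss estimate: waiting time times the observed loss.
        for i in action:
            l_hat[i] += k[i] * loss[i]
    return total
```

The truncation parameter `M` bounds the per-round running time at the price of a small bias in the loss estimates; trading these off is part of the analysis in the paper.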
Cite
Text
Neu and Bartók. "An Efficient Algorithm for Learning with Semi-Bandit Feedback." International Conference on Algorithmic Learning Theory, 2013. doi:10.1007/978-3-642-40935-6_17
Markdown
[Neu and Bartók. "An Efficient Algorithm for Learning with Semi-Bandit Feedback." International Conference on Algorithmic Learning Theory, 2013.](https://mlanthology.org/alt/2013/neu2013alt-efficient/) doi:10.1007/978-3-642-40935-6_17
BibTeX
@inproceedings{neu2013alt-efficient,
title = {{An Efficient Algorithm for Learning with Semi-Bandit Feedback}},
author = {Neu, Gergely and Bartók, Gábor},
booktitle = {International Conference on Algorithmic Learning Theory},
year = {2013},
pages = {234-248},
doi = {10.1007/978-3-642-40935-6_17},
url = {https://mlanthology.org/alt/2013/neu2013alt-efficient/}
}