Efficient Optimal Learning for Contextual Bandits
Abstract
We address the problem of learning in an online setting where the learner repeatedly observes features, selects among a set of actions, and receives a reward for the action taken. We provide the first efficient algorithm with optimal regret. Our algorithm uses a cost-sensitive classification learner as an oracle and has a running time polylog(N), where N is the number of classification rules among which the oracle might choose. This is exponentially faster than all previous algorithms that achieve optimal regret in this setting. Our formulation also enables us to create an algorithm whose regret is additive, rather than multiplicative, in the feedback delay, improving on all previous work.
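The interaction protocol and the oracle abstraction can be illustrated with a toy sketch. This is not the paper's algorithm: it is a simple epsilon-greedy learner over a small hypothetical policy class, where the cost-sensitive classification oracle is simulated by brute-force enumeration and policy values are estimated with inverse-propensity scoring. The environment, policy class, and all names here are illustrative assumptions.

```python
import random

ACTIONS = [0, 1]

def make_policy(threshold):
    """A classification rule: map a scalar context to an action."""
    return lambda x: 0 if x < threshold else 1

# Hypothetical tiny policy class (N = 9 threshold rules). The paper's point
# is that the oracle avoids enumerating a huge class; here we enumerate.
POLICIES = [make_policy(t / 10) for t in range(1, 10)]

def argmax_oracle(examples):
    """Simulated cost-sensitive oracle: return the policy maximizing total
    estimated reward on (context, action, reward_estimate) examples."""
    def score(pi):
        return sum(r for (x, a, r) in examples if pi(x) == a)
    return max(POLICIES, key=score)

def true_reward(x, a):
    # Illustrative environment: action 1 is better when the context >= 0.5.
    return 1.0 if (a == 1) == (x >= 0.5) else 0.0

def run(T=500, eps=0.1, seed=0):
    rng = random.Random(seed)
    history = []   # inverse-propensity-weighted examples
    total = 0.0
    K = len(ACTIONS)
    for _ in range(T):
        x = rng.random()                       # observe features
        greedy = argmax_oracle(history)(x) if history else rng.choice(ACTIONS)
        a = rng.choice(ACTIONS) if rng.random() < eps else greedy
        p = eps / K + (1 - eps) * (1.0 if a == greedy else 0.0)
        r = true_reward(x, a)                  # receive reward for the action
        history.append((x, a, r / p))          # unbiased reward estimate
        total += r
    return total / T
```

Calling `run()` returns the average per-round reward; as the learner homes in on the best threshold rule, this approaches the optimum up to the exploration rate. Note the epsilon-greedy exploration used here is exactly the kind of suboptimal-regret shortcut the paper avoids.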
Cite
Text
Dudík et al. "Efficient Optimal Learning for Contextual Bandits." Conference on Uncertainty in Artificial Intelligence, 2011.
BibTeX
@inproceedings{dudik2011uai-efficient,
title = {{Efficient Optimal Learning for Contextual Bandits}},
author = {Dudík, Miroslav and Hsu, Daniel J. and Kale, Satyen and Karampatziakis, Nikos and Langford, John and Reyzin, Lev and Zhang, Tong},
booktitle = {Conference on Uncertainty in Artificial Intelligence},
year = {2011},
pages = {169--178},
url = {https://mlanthology.org/uai/2011/dudik2011uai-efficient/}
}