On the Complexity of A/B Testing
Abstract
A/B testing refers to the task of determining the best option among two alternatives that yield random outcomes. We provide distribution-dependent lower bounds on the performance of A/B testing that improve over the results currently available, both in the fixed-confidence (or delta-PAC) and fixed-budget settings. When the distributions of the outcomes are Gaussian, we prove that the complexities of the fixed-confidence and fixed-budget settings are equivalent, and that uniform sampling of both alternatives is optimal only in the case of equal variances. In the common variance case, we also provide a stopping rule that terminates faster than existing fixed-confidence algorithms. In the case of Bernoulli distributions, we show that the complexity of the fixed-budget setting is smaller than that of the fixed-confidence setting, and that uniform sampling of both alternatives, though not optimal, is advisable in practice when combined with an appropriate stopping criterion.
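To make the fixed-confidence (delta-PAC) setting concrete, the sketch below simulates a simple A/B test with Gaussian outcomes: both alternatives are sampled uniformly, and the test stops once the empirical gap exceeds a generic anytime deviation threshold. The threshold used here is a standard union-bound style rule assuming a known variance, not the paper's optimized stopping rule or sampling allocation; all function names and parameters are illustrative.

```python
import math
import random

def ab_test_fixed_confidence(sample_a, sample_b, delta=0.05, sigma=1.0,
                             max_steps=100_000):
    """Uniformly sample two arms; stop when the empirical gap exceeds a
    generic anytime confidence threshold (illustrative, not the paper's
    exact stopping rule). Returns (winner, total samples used)."""
    sum_a = sum_b = 0.0
    for t in range(1, max_steps + 1):
        sum_a += sample_a()
        sum_b += sample_b()
        gap = abs(sum_a - sum_b) / t
        # Deviation bound for the difference of two empirical Gaussian
        # means (variance 2*sigma^2/t), union-bounded over time steps.
        threshold = math.sqrt((4 * sigma**2 / t) * math.log(2 * t**2 / delta))
        if gap > threshold:
            return ('A' if sum_a > sum_b else 'B'), 2 * t
    return None, 2 * max_steps  # inconclusive within the step limit

random.seed(0)
winner, n = ab_test_fixed_confidence(
    lambda: random.gauss(0.5, 1.0),  # arm A: mean 0.5, unit variance
    lambda: random.gauss(0.0, 1.0),  # arm B: mean 0.0, unit variance
    delta=0.05,
)
```

With equal variances, as here, the paper shows uniform sampling is optimal in the Gaussian case; its contribution is a sharper stopping rule and matching lower bounds, which this generic threshold does not attempt to reproduce.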
Cite
Text
Kaufmann et al. "On the Complexity of A/B Testing." Annual Conference on Computational Learning Theory, 2014.
Markdown
[Kaufmann et al. "On the Complexity of A/B Testing." Annual Conference on Computational Learning Theory, 2014.](https://mlanthology.org/colt/2014/kaufmann2014colt-complexity/)
BibTeX
@inproceedings{kaufmann2014colt-complexity,
  title = {{On the Complexity of A/B Testing}},
  author = {Kaufmann, Emilie and Cappé, Olivier and Garivier, Aurélien},
  booktitle = {Annual Conference on Computational Learning Theory},
  year = {2014},
  pages = {461-481},
  url = {https://mlanthology.org/colt/2014/kaufmann2014colt-complexity/}
}