The Power of Batching in Multiple Hypothesis Testing
Abstract
One important partition of algorithms for controlling the false discovery rate (FDR) in multiple testing is into offline and online algorithms. The former generally achieve significantly higher power of discovery, while the latter allow making decisions sequentially as well as adaptively formulating hypotheses based on past observations. Using existing methodology, it is unclear how one could trade off the benefits of these two broad families of algorithms, all the while preserving their formal FDR guarantees. To this end, we introduce Batch-BH and Batch-St-BH, algorithms for controlling the FDR when a possibly infinite sequence of batches of hypotheses is tested by repeated application of one of the most widely used offline algorithms, the Benjamini-Hochberg (BH) method or Storey’s improvement of the BH method. We show that our algorithms interpolate between existing online and offline methodology, thus trading off the best of both worlds.
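For reference, the offline building block that Batch-BH repeatedly applies to each batch is the standard Benjamini-Hochberg step-up procedure. Below is a minimal sketch in Python (NumPy assumed); the function name and the batched usage comment are illustrative only, since the actual Batch-BH and Batch-St-BH algorithms choose each batch's test level adaptively to preserve the FDR guarantee, as detailed in the paper.

import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Standard offline Benjamini-Hochberg (BH) step-up procedure.

    Rejects the hypotheses with the k* smallest p-values, where k* is the
    largest k such that p_(k) <= k * alpha / m, and returns a boolean array
    marking the rejected (discovered) hypotheses.
    """
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)                          # ranks p-values in ascending order
    thresholds = alpha * np.arange(1, m + 1) / m   # BH step-up thresholds k * alpha / m
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k_star = np.nonzero(below)[0].max()        # largest index satisfying the BH condition
        rejected[order[:k_star + 1]] = True        # reject the k_star + 1 smallest p-values
    return rejected

# Illustrative batched usage (NOT the level allocation of Batch-BH from the paper):
# for batch_p_values in stream_of_batches:
#     discoveries = benjamini_hochberg(batch_p_values, alpha=per_batch_level)

Naively reusing the same level for every batch does not in general control the FDR over the entire sequence of batches; the role of Batch-BH and Batch-St-BH is to choose the per-batch test levels so that the overall FDR guarantee is preserved.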
Cite
Text
Zrnic et al. "The Power of Batching in Multiple Hypothesis Testing." Artificial Intelligence and Statistics, 2020.
Markdown
[Zrnic et al. "The Power of Batching in Multiple Hypothesis Testing." Artificial Intelligence and Statistics, 2020.](https://mlanthology.org/aistats/2020/zrnic2020aistats-power/)
BibTeX
@inproceedings{zrnic2020aistats-power,
title = {{The Power of Batching in Multiple Hypothesis Testing}},
author = {Zrnic, Tijana and Jiang, Daniel and Ramdas, Aaditya and Jordan, Michael},
booktitle = {Artificial Intelligence and Statistics},
year = {2020},
pages = {3806--3815},
volume = {108},
url = {https://mlanthology.org/aistats/2020/zrnic2020aistats-power/}
}