An Information-Theoretic Lower Bound in Time-Uniform Estimation
Abstract
We present an information-theoretic lower bound for the problem of parameter estimation with time-uniform coverage guarantees. We use a reduction to sequential testing to obtain stronger lower bounds that capture the hardness of the time-uniform setting. In the case of location model estimation and logistic regression, our lower bound is $\Omega(\sqrt{n^{-1}\log \log n})$, which is tight up to constant factors in typical settings.
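As a rough illustration (not taken from the paper), the following sketch compares the familiar fixed-sample confidence width, which scales as $\sqrt{1/n}$, against a width matching the paper's time-uniform lower bound of $\Omega(\sqrt{n^{-1}\log\log n})$, up to unspecified constants. The constant `c` and the function names are hypothetical.

```python
import math

def fixed_n_width(n, c=1.0):
    # Fixed-sample confidence width: scales as c * sqrt(1/n).
    return c * math.sqrt(1.0 / n)

def time_uniform_width(n, c=1.0):
    # Width matching the paper's time-uniform lower bound
    # Omega(sqrt(log log n / n)), up to constant factors.
    return c * math.sqrt(math.log(math.log(n)) / n)

# The price of time-uniformity is the sqrt(log log n) factor,
# which grows, but extremely slowly:
for n in (100, 10_000, 1_000_000):
    ratio = time_uniform_width(n) / fixed_n_width(n)
    print(f"n={n:>9}: extra factor ~ {ratio:.3f}")
```

The $\sqrt{\log\log n}$ gap is the same iterated-logarithm factor that appears in law-of-the-iterated-logarithm confidence sequences, which is why the bound is tight in typical settings.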
Cite
Text
Duchi and Haque. "An Information-Theoretic Lower Bound in Time-Uniform Estimation." Conference on Learning Theory, 2024. https://mlanthology.org/colt/2024/duchi2024colt-informationtheoretic/
BibTeX
@inproceedings{duchi2024colt-informationtheoretic,
title = {{An Information-Theoretic Lower Bound in Time-Uniform Estimation}},
author = {Duchi, John and Haque, Saminul},
booktitle = {Conference on Learning Theory},
year = {2024},
pages = {1486--1500},
volume = {247},
url = {https://mlanthology.org/colt/2024/duchi2024colt-informationtheoretic/}
}