A Lower Bound for Linear and Kernel Regression with Adaptive Covariates
Abstract
We prove that the continuous-time version of the concentration bounds of Abbasi-Yadkori et al. (2011) for adaptive linear regression cannot be improved in general, showing that there can be a significant price for sequential design. This resolves the continuous-time version of the COLT open problem of Vakili et al. (2021b) on confidence intervals for kernel regression with sequential designs. Experimental evidence suggests that improved confidence bounds are also not possible in discrete time.
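For context, the discrete-time bound in question is the self-normalized concentration inequality of Abbasi-Yadkori et al. (2011). A sketch in standard notation (the symbols below are conventional, not taken from this page): observe $Y_s = \langle X_s, \theta \rangle + \eta_s$ with conditionally $R$-subgaussian noise $\eta_s$ and adaptively chosen covariates $X_s$; let $V_t = \lambda I + \sum_{s \le t} X_s X_s^\top$ and let $\hat{\theta}_t$ be the ridge regression estimator. Then with probability at least $1 - \delta$, simultaneously for all $t \ge 1$,

% Self-normalized bound (Abbasi-Yadkori et al., 2011, Theorem 2); notation ours.
\[
  \big\lVert \hat{\theta}_t - \theta \big\rVert_{V_t}
  \;\le\;
  R \sqrt{2 \log\!\left(
    \frac{\det(V_t)^{1/2} \det(\lambda I)^{-1/2}}{\delta}
  \right)}
  \;+\; \sqrt{\lambda}\, \lVert \theta \rVert_2 .
\]

The open problem of Vakili et al. (2021b) asks, roughly, whether the $\log\det V_t$ (information-gain) factor can be removed for sequential designs, as it can for fixed designs; this paper answers no in continuous time.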
Cite
Text
Lattimore. "A Lower Bound for Linear and Kernel Regression with Adaptive Covariates." Conference on Learning Theory, 2023.Markdown
[Lattimore. "A Lower Bound for Linear and Kernel Regression with Adaptive Covariates." Conference on Learning Theory, 2023.](https://mlanthology.org/colt/2023/lattimore2023colt-lower/)BibTeX
@inproceedings{lattimore2023colt-lower,
title = {{A Lower Bound for Linear and Kernel Regression with Adaptive Covariates}},
author = {Lattimore, Tor},
booktitle = {Conference on Learning Theory},
year = {2023},
pages = {2095-2113},
volume = {195},
url = {https://mlanthology.org/colt/2023/lattimore2023colt-lower/}
}