Approximate Function Evaluation via Multi-Armed Bandits
Abstract
We study the problem of estimating the value of a known smooth function $f$ at an unknown point $\mu \in \mathbb{R}^n$, where each component $\mu_i$ can be sampled via a noisy oracle. Sampling the components of $\mu$ corresponding to directions with larger directional derivatives of $f$ more frequently is more sample-efficient. However, as $\mu$ is unknown, the optimal sampling frequencies are also unknown. We design an instance-adaptive algorithm that learns to sample according to the importance of each coordinate and, with probability at least $1-\delta$, returns an $\epsilon$-accurate estimate of $f(\mu)$. We generalize our algorithm to adapt to heteroskedastic noise, and prove asymptotic optimality when $f$ is linear. We corroborate our theoretical results with numerical experiments, showing the dramatic gains afforded by adaptivity.
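The core idea of the abstract can be illustrated with a minimal two-phase sketch (this is an illustrative simplification, not the paper's algorithm): spend a small fraction of the sample budget uniformly to form a rough estimate of $\mu$, then allocate the remaining samples in proportion to the estimated directional-derivative magnitudes $|\partial f / \partial \mu_i|$. The problem instance (`f`, `mu`, the Gaussian oracle, and the `explore_frac` parameter) is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance: estimate f(mu) for known smooth f at unknown mu,
# with a noisy per-coordinate oracle (homoskedastic Gaussian noise).
mu = np.array([2.0, 0.5, -1.0])
sigma = 1.0

def f(x):
    return np.sum(x ** 2)          # f(x) = ||x||^2, so f(mu) = 5.25 here

def grad_f(x):
    return 2.0 * x                 # gradient of f is known (f is known)

def oracle(i, n):
    """Draw n noisy samples of the unknown coordinate mu_i."""
    return mu[i] + sigma * rng.standard_normal(n)

def adaptive_estimate(budget, explore_frac=0.2):
    d = len(mu)
    # Phase 1: uniform exploration to get a rough estimate of mu.
    n0 = max(1, int(explore_frac * budget / d))
    mu_hat = np.array([oracle(i, n0).mean() for i in range(d)])
    counts = np.full(d, n0, dtype=float)
    # Phase 2: spend the remaining budget proportionally to the
    # estimated directional-derivative magnitudes |df/dmu_i|.
    weights = np.abs(grad_f(mu_hat)) + 1e-12
    extra = budget - n0 * d
    alloc = np.floor(extra * weights / weights.sum()).astype(int)
    for i in range(d):
        if alloc[i] > 0:
            s = oracle(i, alloc[i]).sum()
            mu_hat[i] = (mu_hat[i] * counts[i] + s) / (counts[i] + alloc[i])
            counts[i] += alloc[i]
    return f(mu_hat)

print(adaptive_estimate(100_000))
```

Coordinates with larger gradient magnitude receive more samples, which shrinks the dominant terms in the error of $f(\hat\mu)$; the paper's algorithm does this adaptively with high-probability guarantees rather than in a fixed two-phase split.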
Cite
Text
Baharav et al. "Approximate Function Evaluation via Multi-Armed Bandits." Artificial Intelligence and Statistics, 2022.
Markdown
[Baharav et al. "Approximate Function Evaluation via Multi-Armed Bandits." Artificial Intelligence and Statistics, 2022.](https://mlanthology.org/aistats/2022/baharav2022aistats-approximate/)
BibTeX
@inproceedings{baharav2022aistats-approximate,
title = {{Approximate Function Evaluation via Multi-Armed Bandits}},
author = {Baharav, Tavor Z. and Cheng, Gary and Pilanci, Mert and Tse, David},
booktitle = {Artificial Intelligence and Statistics},
year = {2022},
pages = {108--135},
volume = {151},
url = {https://mlanthology.org/aistats/2022/baharav2022aistats-approximate/}
}