Chernoff Sampling for Active Testing and Extension to Active Regression
Abstract
Active learning can reduce the number of samples needed to perform a hypothesis test and to estimate the parameters of a model. In this paper, we revisit Chernoff's work, which described an asymptotically optimal algorithm for performing a hypothesis test. We obtain a novel sample complexity bound for Chernoff's algorithm, with a non-asymptotic term that characterizes its performance at a fixed confidence level. We also develop an extension of Chernoff sampling that can be used to estimate the parameters of a wide variety of models, and we obtain a non-asymptotic bound on the estimation error. We apply our extension of Chernoff sampling to actively learn neural network models and to estimate parameters in real-data linear and non-linear regression problems, where our approach compares favorably with state-of-the-art methods.
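To make the active-testing setting concrete, the sketch below implements a simplified, deterministic variant of Chernoff's sampling rule for discriminating between two hypotheses: at each step it computes the maximum-likelihood hypothesis and then queries the experiment whose observation distribution best separates that hypothesis from its hardest alternative (a max-min KL criterion). The hypothesis set, experiment outcome probabilities, and greedy single-action selection are illustrative assumptions; Chernoff's original procedure samples experiments from a maximin-optimal randomized design rather than greedily picking one action.

```python
import math
import random

def kl_bernoulli(p, q):
    # KL divergence between Bernoulli(p) and Bernoulli(q), clipped for stability.
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

# Hypothetical setup: 2 hypotheses, 3 experiments (actions).
# probs[h][a] = P(observation = 1 | hypothesis h, action a).
# Action 0 is uninformative; actions 1 and 2 discriminate the hypotheses.
probs = [
    [0.5, 0.8, 0.2],  # hypothesis 0
    [0.5, 0.3, 0.7],  # hypothesis 1
]

def chernoff_action(log_lik):
    # Current maximum-likelihood hypothesis.
    i_hat = max(range(len(log_lik)), key=lambda h: log_lik[h])
    alternatives = [h for h in range(len(log_lik)) if h != i_hat]
    # Simplified Chernoff rule: choose the action that maximizes the
    # worst-case KL divergence between i_hat and any alternative.
    def worst_case_kl(a):
        return min(kl_bernoulli(probs[i_hat][a], probs[j][a]) for j in alternatives)
    return max(range(len(probs[0])), key=worst_case_kl)

def run(true_h, n_steps=200, seed=0):
    rng = random.Random(seed)
    log_lik = [0.0 for _ in probs]
    for _ in range(n_steps):
        a = chernoff_action(log_lik)
        y = 1 if rng.random() < probs[true_h][a] else 0
        for h in range(len(probs)):
            p = probs[h][a]
            log_lik[h] += math.log(p if y == 1 else 1 - p)
    # Declare the hypothesis with the highest log-likelihood.
    return max(range(len(log_lik)), key=lambda h: log_lik[h])
```

With 200 adaptively chosen observations the log-likelihood gap between the true hypothesis and its alternative grows roughly linearly in the per-step KL divergence, so the declared hypothesis matches the true one with overwhelming probability.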
Cite
Text

Mukherjee et al. "Chernoff Sampling for Active Testing and Extension to Active Regression." Artificial Intelligence and Statistics, 2022.

Markdown

[Mukherjee et al. "Chernoff Sampling for Active Testing and Extension to Active Regression." Artificial Intelligence and Statistics, 2022.](https://mlanthology.org/aistats/2022/mukherjee2022aistats-chernoff/)

BibTeX
@inproceedings{mukherjee2022aistats-chernoff,
title = {{Chernoff Sampling for Active Testing and Extension to Active Regression}},
author = {Mukherjee, Subhojyoti and Tripathy, Ardhendu S. and Nowak, Robert},
booktitle = {Artificial Intelligence and Statistics},
year = {2022},
pages = {7384--7432},
volume = {151},
url = {https://mlanthology.org/aistats/2022/mukherjee2022aistats-chernoff/}
}