ActiveHedge: Hedge Meets Active Learning
Abstract
We consider the classical problem of multi-class prediction with expert advice, but with an active learning twist: the learner queries the labels of only a small number of examples, yet still aims to minimize regret to the best expert as usual. The learner is also allowed a very short "burn-in" phase in which it can fast-forward and query certain highly informative examples. We design an algorithm that uses Hedge (a.k.a. Exponential Weights) as a subroutine, and we show that under a particular combinatorial constraint on the matrix of expert predictions we can obtain a strong regret guarantee while querying very few labels. This constraint, which we refer to as $\zeta$-compactness, or simply compactness, can be viewed as a non-stochastic variant of the disagreement coefficient, a parameter widely used to reason about the sample complexity of active learning in the IID setting. We also give a polynomial-time algorithm that computes the $\zeta$-compactness of a matrix up to an approximation factor of 3.
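For context, the sketch below shows the standard full-information Hedge (Exponential Weights) subroutine named in the abstract. It is not the paper's ActiveHedge algorithm, which additionally decides which labels to query; the learning rate and the synthetic loss matrix here are illustrative choices, not taken from the paper.

```python
import numpy as np

def hedge(losses: np.ndarray, eta: float) -> float:
    """Full-information Hedge over a T x N matrix of expert losses.

    losses[t, i] is the loss of expert i in round t, assumed in [0, 1].
    Returns the learner's cumulative expected loss.
    """
    T, N = losses.shape
    log_w = np.zeros(N)                  # log-weights; uniform start
    total = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())  # numerically stable exponentiation
        p /= p.sum()                     # distribution over experts
        total += float(p @ losses[t])    # expected loss of the mixture
        log_w -= eta * losses[t]         # exponential-weights update
    return total

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, N = 1000, 10
    losses = rng.random((T, N))          # illustrative synthetic losses
    eta = np.sqrt(2 * np.log(N) / T)     # standard O(sqrt(T log N)) tuning
    regret = hedge(losses, eta) - losses.sum(axis=0).min()
    print(f"regret vs. best expert: {regret:.2f}")
```

With `eta` on the order of $\sqrt{\log N / T}$, Hedge guarantees $O(\sqrt{T \log N})$ regret against the best expert, but only after observing every label; the paper's contribution is a comparable guarantee under $\zeta$-compactness while querying far fewer labels.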
Cite
Text
Kumar et al. "ActiveHedge: Hedge Meets Active Learning." International Conference on Machine Learning, 2022.
Markdown
[Kumar et al. "ActiveHedge: Hedge Meets Active Learning." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/kumar2022icml-activehedge/)
BibTeX
@inproceedings{kumar2022icml-activehedge,
title = {{ActiveHedge: Hedge Meets Active Learning}},
author = {Kumar, Bhuvesh and Abernethy, Jacob D and Saligrama, Venkatesh},
booktitle = {International Conference on Machine Learning},
year = {2022},
pages = {11694--11709},
volume = {162},
url = {https://mlanthology.org/icml/2022/kumar2022icml-activehedge/}
}