Robustness of Bayesian Pool-Based Active Learning Against Prior Misspecification
Abstract
We study the robustness of active learning (AL) algorithms against prior misspecification: whether an algorithm achieves similar performance using a perturbed prior as compared to using the true prior. In both the average and worst cases of the maximum coverage setting, we prove that all alpha-approximate algorithms are robust (i.e., near alpha-approximate) if the utility is Lipschitz continuous in the prior. We further show that robustness may not be achieved if the utility is non-Lipschitz. This suggests we should use a Lipschitz utility for AL if robustness is required. For the minimum cost setting, we can also obtain a robustness result for approximate AL algorithms. Our results imply that many commonly used AL algorithms are robust against perturbed priors. We then propose the use of a mixture prior to alleviate the problem of prior misspecification. We analyze the robustness of the uniform mixture prior and show experimentally that it performs reasonably well in practice.
Cite
Text
Cuong et al. "Robustness of Bayesian Pool-Based Active Learning Against Prior Misspecification." AAAI Conference on Artificial Intelligence, 2016. doi:10.1609/AAAI.V30I1.10233
Markdown
[Cuong et al. "Robustness of Bayesian Pool-Based Active Learning Against Prior Misspecification." AAAI Conference on Artificial Intelligence, 2016.](https://mlanthology.org/aaai/2016/cuong2016aaai-robustness/) doi:10.1609/AAAI.V30I1.10233
BibTeX
@inproceedings{cuong2016aaai-robustness,
title = {{Robustness of Bayesian Pool-Based Active Learning Against Prior Misspecification}},
author = {Cuong, Nguyen Viet and Ye, Nan and Lee, Wee Sun},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2016},
pages = {1512--1518},
doi = {10.1609/AAAI.V30I1.10233},
url = {https://mlanthology.org/aaai/2016/cuong2016aaai-robustness/}
}