A Bayesian Divergence Prior for Classifier Adaptation

Abstract

Adaptation of statistical classifiers is critical when a target (or testing) distribution is different from the distribution that governs training data. In such cases, a classifier optimized for the training distribution needs to be adapted for optimal use in the target distribution. This paper presents a Bayesian “divergence prior” for generic classifier adaptation. Instantiations of this prior lead to simple yet principled adaptation strategies for a variety of classifiers, which yield superior performance in practice. In addition, this paper derives several adaptation error bounds by applying the divergence prior in the PAC-Bayesian setting.
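
As the name suggests, a divergence prior favors adapted models that remain close, in a divergence sense, to the unadapted classifier trained on the source distribution. As a rough illustration of that idea only, the sketch below adapts a logistic-regression classifier to a small target sample by penalizing deviation of the adapted weights from the source-trained weights. The logistic-regression model, the quadratic penalty, and all names (train_logreg, lam, and so on) are illustrative assumptions, not the paper's actual instantiations or notation.

import numpy as np

def train_logreg(X, y, w_init, prior_center=None, lam=0.0, lr=0.1, n_iter=2000):
    # Gradient descent on the average logistic loss, optionally with a quadratic
    # penalty (lam / 2) * ||w - prior_center||^2 pulling the solution toward a
    # reference model (here: the source-trained weights).
    w = w_init.copy()
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted class-1 probabilities
        grad = X.T @ (p - y) / len(y)          # gradient of the average logistic loss
        if prior_center is not None:
            grad += lam * (w - prior_center)   # gradient of the prior penalty
        w -= lr * grad
    return w

rng = np.random.default_rng(0)

# Source (training) distribution: plenty of labeled data.
Xs = rng.normal(size=(1000, 2))
ys = (Xs[:, 0] + Xs[:, 1] > 0).astype(float)
w_source = train_logreg(Xs, ys, np.zeros(2))

# Target (testing) distribution: shifted inputs, only a handful of labels.
Xt = rng.normal(loc=[0.5, -0.5], size=(20, 2))
yt = (Xt[:, 0] > 0.2).astype(float)

# Adapted classifier: fit to the target sample, regularized toward w_source.
w_adapted = train_logreg(Xt, yt, w_source, prior_center=w_source, lam=1.0)
print("source weights: ", w_source)
print("adapted weights:", w_adapted)

With lam = 0 the target fit ignores the source model entirely; larger values of lam pull the solution toward it, which is the qualitative behavior a prior of this kind formalizes.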

Cite

Text

Li and Bilmes. "A Bayesian Divergence Prior for Classifier Adaptation." Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, 2007.

Markdown

[Li and Bilmes. "A Bayesian Divergence Prior for Classifier Adaptation." Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, 2007.](https://mlanthology.org/aistats/2007/li2007aistats-bayesian/)

BibTeX

@inproceedings{li2007aistats-bayesian,
  title     = {{A Bayesian Divergence Prior for Classifier Adaptation}},
  author    = {Li, Xiao and Bilmes, Jeff},
  booktitle = {Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics},
  year      = {2007},
  pages     = {275--282},
  volume    = {2},
  url       = {https://mlanthology.org/aistats/2007/li2007aistats-bayesian/}
}