Adaptive Learning of Density Ratios in RKHS

Abstract

Estimating the ratio of two probability densities from finitely many observations drawn from each density is a central problem in machine learning and statistics, with applications in two-sample testing, divergence estimation, generative modeling, covariate shift adaptation, conditional density estimation, and novelty detection. In this work, we analyze a large class of density ratio estimation methods that minimize a regularized Bregman divergence between the true density ratio and a model in a reproducing kernel Hilbert space (RKHS). We derive new finite-sample error bounds, and we propose a Lepskii-type parameter choice principle that minimizes the bounds without knowledge of the regularity of the density ratio. In the special case of square loss, our method adaptively achieves a minimax optimal error rate. A numerical illustration is provided.
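For the square-loss case the abstract mentions, the regularized Bregman minimization reduces to a kernel least-squares problem with a closed-form solution (the uLSIF-style estimator). The sketch below is an illustration under assumed choices, not the paper's exact algorithm: a Gaussian kernel, centers placed at the numerator samples, and a fixed regularization parameter `lam` rather than the adaptive Lepskii-type choice the paper proposes.

```python
import numpy as np

def gaussian_kernel(X, C, sigma):
    # K[i, k] = exp(-||X_i - C_k||^2 / (2 sigma^2)) for rows of X against centers C
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_density_ratio(x_nu, x_de, sigma=0.5, lam=0.1):
    """Square-loss (uLSIF-style) density-ratio fit in an RKHS.

    Models r(x) = sum_k alpha_k K(x, c_k) and solves the regularized
    least-squares problem (H + lam * I) alpha = h, where H and h are
    empirical kernel moments under the denominator and numerator samples.
    Centers c_k = x_nu and the hyperparameters are illustrative choices.
    """
    C = x_nu
    K_de = gaussian_kernel(x_de, C, sigma)
    K_nu = gaussian_kernel(x_nu, C, sigma)
    H = K_de.T @ K_de / len(x_de)   # second moment under denominator density q
    h = K_nu.mean(axis=0)           # first moment under numerator density p
    alpha = np.linalg.solve(H + lam * np.eye(len(C)), h)
    return lambda x: gaussian_kernel(x, C, sigma) @ alpha

rng = np.random.default_rng(0)
x_nu = rng.normal(0.0, 1.0, size=(200, 1))  # samples from p = N(0, 1)
x_de = rng.normal(0.0, 2.0, size=(200, 1))  # samples from q = N(0, 4)
r = fit_density_ratio(x_nu, x_de)
# The true ratio at x = 0 is p(0)/q(0) = 2; the estimate should be in that vicinity.
est = float(r(np.zeros((1, 1)))[0])
```

In practice the regularization parameter `lam` is exactly the quantity the paper's Lepskii-type principle selects adaptively; here it is simply fixed by hand.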

Cite

Text

Zellinger et al. "Adaptive Learning of Density Ratios in RKHS." Journal of Machine Learning Research, 2023.

Markdown

[Zellinger et al. "Adaptive Learning of Density Ratios in RKHS." Journal of Machine Learning Research, 2023.](https://mlanthology.org/jmlr/2023/zellinger2023jmlr-adaptive/)

BibTeX

@article{zellinger2023jmlr-adaptive,
  title     = {{Adaptive Learning of Density Ratios in RKHS}},
  author    = {Zellinger, Werner and Kindermann, Stefan and Pereverzyev, Sergei V.},
  journal   = {Journal of Machine Learning Research},
  year      = {2023},
  pages     = {1--28},
  volume    = {24},
  url       = {https://mlanthology.org/jmlr/2023/zellinger2023jmlr-adaptive/}
}