Black Box Quantiles for Kernel Learning

Abstract

Kernel methods have been successfully used in various domains to model nonlinear patterns. However, the structure of the kernels is typically handcrafted for each dataset based on the experience of the data analyst. In this paper, we present a novel technique to learn kernels that best fit the data. We exploit the measure-theoretic view of a shift-invariant kernel given by Bochner's theorem, and automatically learn the measure in terms of a parameterized quantile function. This flexible black box quantile function, evaluated on Quasi-Monte Carlo samples, builds up quasi-random Fourier feature maps that can approximate arbitrary kernels. The proposed method is not only general enough to be used in any kernel machine, but can also be combined with other kernel design techniques. We learn expressive kernels on a variety of datasets, verifying the method's ability to automatically discover complex patterns without being guided by human expert knowledge.
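The core mechanism in the abstract can be illustrated with a minimal sketch: by Bochner's theorem, a shift-invariant kernel corresponds to a spectral measure, and samples from that measure can be drawn by pushing Quasi-Monte Carlo points through the measure's quantile function, yielding a quasi-random Fourier feature map. The sketch below is not the paper's learned black box quantile function; as an assumed, fixed example it uses the Gaussian quantile function (`scipy.stats.norm.ppf`), whose corresponding kernel is the RBF kernel, so the approximation can be checked against a closed form.

```python
import numpy as np
from scipy.stats import norm, qmc

# Assumption for illustration: spectral measure N(0, I), i.e. the quantile
# function of a standard Gaussian, which corresponds to the RBF kernel
# k(x, y) = exp(-||x - y||^2 / 2). The paper instead *learns* this quantile.
d, D = 2, 512  # input dimension, number of Fourier features
rng = np.random.default_rng(0)

# Quasi-Monte Carlo points in [0, 1)^d, pushed through the quantile function
# to obtain spectral frequencies (the measure-theoretic view of the kernel).
u = qmc.Halton(d=d, seed=0).random(D)
W = norm.ppf(u)                      # frequencies, shape (D, d)
b = rng.uniform(0.0, 2 * np.pi, D)   # random phases

def phi(X):
    # Quasi-random Fourier feature map: phi(x) @ phi(y) approximates k(x, y).
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

X = rng.standard_normal((5, d))
K_approx = phi(X) @ phi(X).T
K_exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print("max abs error:", np.abs(K_approx - K_exact).max())
```

Replacing `norm.ppf` with a parameterized, learnable quantile function recovers the paper's setting: the feature map, and hence the implied kernel, is fit to the data rather than handcrafted.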

Cite

Text

Tompkins et al. "Black Box Quantiles for Kernel Learning." Artificial Intelligence and Statistics, 2019.

Markdown

[Tompkins et al. "Black Box Quantiles for Kernel Learning." Artificial Intelligence and Statistics, 2019.](https://mlanthology.org/aistats/2019/tompkins2019aistats-black/)

BibTeX

@inproceedings{tompkins2019aistats-black,
  title     = {{Black Box Quantiles for Kernel Learning}},
  author    = {Tompkins, Anthony and Senanayake, Ransalu and Morere, Philippe and Ramos, Fabio},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2019},
  pages     = {1427--1437},
  volume    = {89},
  url       = {https://mlanthology.org/aistats/2019/tompkins2019aistats-black/}
}