Pseudo-Bayesian Learning with Kernel Fourier Transform as Prior

Abstract

We revisit the kernel random Fourier features (RFF) method of Rahimi and Recht (2007) through the lens of PAC-Bayesian theory. While the primary goal of RFF is to approximate a kernel, we view the Fourier transform as a prior distribution over trigonometric hypotheses. This viewpoint naturally suggests learning a posterior over these hypotheses. We derive generalization bounds that are optimized by learning a pseudo-posterior obtained from a closed-form expression. Based on this study, we consider two learning strategies: the first one finds a compact landmarks-based representation of the data, where each landmark is associated with a distribution-tailored similarity measure, while the second one provides a PAC-Bayesian justification for the kernel alignment method of Sinha and Duchi (2016).
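The sketch below illustrates the "Fourier transform as prior" viewpoint, assuming an RBF kernel, whose Fourier transform is a Gaussian distribution over frequencies. The exponential reweighting of the sampled frequencies is a hypothetical stand-in for the paper's closed-form pseudo-posterior, written in the spirit of kernel-alignment reweighting; it is not the exact expression derived in the paper, and `beta` is an illustrative hyperparameter.

```python
# Minimal sketch: RFF prior over trigonometric hypotheses, reweighted by a
# hypothetical pseudo-posterior (NOT the paper's exact closed-form expression).
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data.
n, d = 200, 5
X = rng.standard_normal((n, d))
y = np.sign(X[:, 0] + 0.1 * rng.standard_normal(n))

# Prior: sample D frequencies from the Fourier transform of the RBF kernel
# k(x, x') = exp(-||x - x'||^2 / (2 * sigma^2)), i.e. omega_j ~ N(0, I/sigma^2).
sigma, D = 1.0, 500
omega = rng.standard_normal((D, d)) / sigma

# Per-frequency alignment of the trigonometric hypothesis with the labels:
# (1/n^2) sum_{i,k} y_i y_k cos(omega . (x_i - x_k)), which factorizes via
# cos(omega.(x_i - x_k)) = cos_i cos_k + sin_i sin_k.
Z_cos = np.cos(X @ omega.T)                           # shape (n, D)
Z_sin = np.sin(X @ omega.T)
align = ((y @ Z_cos) ** 2 + (y @ Z_sin) ** 2) / n**2  # shape (D,)

# Hypothetical pseudo-posterior: exponentially reweight the prior sample
# (beta = 0 recovers plain, unweighted RFF).
beta = 10.0
q = np.exp(beta * align)
q /= q.sum()

# Kernel approximation under the reweighted features; with uniform weights
# q_j = 1/D this is the standard RFF estimate of the RBF kernel.
K_approx = (Z_cos * q) @ Z_cos.T + (Z_sin * q) @ Z_sin.T
print("approx kernel diagonal mean:", K_approx.diagonal().mean())
```

Since `cos^2 + sin^2 = 1` and the weights sum to one, the diagonal of the reweighted kernel estimate stays at 1, matching the RBF kernel's `k(x, x) = 1`.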

Cite

Text

Letarte et al. "Pseudo-Bayesian Learning with Kernel Fourier Transform as Prior." Artificial Intelligence and Statistics, 2019.

Markdown

[Letarte et al. "Pseudo-Bayesian Learning with Kernel Fourier Transform as Prior." Artificial Intelligence and Statistics, 2019.](https://mlanthology.org/aistats/2019/letarte2019aistats-pseudobayesian/)

BibTeX

@inproceedings{letarte2019aistats-pseudobayesian,
  title     = {{Pseudo-Bayesian Learning with Kernel Fourier Transform as Prior}},
  author    = {Letarte, Gaël and Morvant, Emilie and Germain, Pascal},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2019},
  pages     = {768--776},
  volume    = {89},
  url       = {https://mlanthology.org/aistats/2019/letarte2019aistats-pseudobayesian/}
}