Learning Invariant Representations with Kernel Warping

Abstract

Invariance is an effective prior that has been extensively used to bias supervised learning with a *given* representation of data. In order to learn invariant representations, wavelet- and scattering-based methods "hard code" invariance over the *entire* sample space, and are hence restricted to a limited range of transformations. Kernels based on Haar integration also work only on a *group* of transformations. In this work, we break this limitation by designing a new representation learning algorithm that incorporates invariances *beyond transformation*. Our approach, which is based on warping the kernel in a data-dependent fashion, is computationally efficient using random features, and leads to a deep kernel through multiple layers. We apply it to convolutional kernel networks and demonstrate its stability.
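The abstract notes that the warped kernel is made computationally efficient via random features. As background, here is a minimal, generic sketch of the standard random Fourier feature approximation for an RBF kernel (Rahimi and Recht); it illustrates the kind of explicit feature map such methods rely on, not the paper's data-dependent warping itself, and all names and parameters below are illustrative.

```python
import numpy as np

def random_fourier_features(X, n_features=256, gamma=1.0, seed=0):
    """Explicit feature map z(x) such that z(x) @ z(y) approximates
    the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies sampled from the Fourier transform of the RBF kernel.
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Toy usage: compare the approximate and exact kernel matrices.
X = np.random.default_rng(1).normal(size=(5, 3))
Z = random_fourier_features(X, n_features=4096, gamma=1.0)
approx = Z @ Z.T
exact = np.exp(-1.0 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(np.abs(approx - exact).max())  # small approximation error
```

In this generic setting the kernel is fixed; the paper's contribution is to warp the kernel in a data-dependent way so that the learned representation respects the desired invariances, while retaining this kind of random-feature efficiency.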

Cite

Text

Ma et al. "Learning Invariant Representations with Kernel Warping." Artificial Intelligence and Statistics, 2019.

Markdown

[Ma et al. "Learning Invariant Representations with Kernel Warping." Artificial Intelligence and Statistics, 2019.](https://mlanthology.org/aistats/2019/ma2019aistats-learning/)

BibTeX

@inproceedings{ma2019aistats-learning,
  title     = {{Learning Invariant Representations with Kernel Warping}},
  author    = {Ma, Yingyi and Ganapathiraman, Vignesh and Zhang, Xinhua},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2019},
  pages     = {1003--1012},
  volume    = {89},
  url       = {https://mlanthology.org/aistats/2019/ma2019aistats-learning/}
}