Convex Representation Learning for Generalized Invariance in Semi-Inner-Product Space

Abstract

Invariance (defined in a general sense) has been one of the most effective priors for representation learning. Direct factorization of parametric models is feasible only for a small range of invariances, while regularization approaches, despite improved generality, lead to nonconvex optimization. In this work, we develop a \emph{convex} representation learning algorithm for a variety of generalized invariances that can be modeled as semi-norms. Novel Euclidean embeddings are introduced for kernel representers in a semi-inner-product space, and approximation bounds are established. This allows invariant representations, along with accurate predictions, to be learned efficiently and effectively, as confirmed in our experiments.
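To make the semi-norm viewpoint concrete, the following is an illustrative sketch, not the paper's exact formulation: it assumes invariance is encoded by a hypothetical linear operator $D$ whose null space contains the invariant functions, and uses it to build a semi-norm regularizer on a function class $\mathcal{F}$:

\[
\min_{f \in \mathcal{F}} \; \frac{1}{n}\sum_{i=1}^{n} \ell\bigl(f(x_i),\, y_i\bigr) \;+\; \lambda\, [f]^2,
\qquad [f] := \lVert D f \rVert .
\]

Here $[\cdot]$ is a semi-norm: it is nonnegative, absolutely homogeneous, and subadditive, yet $[f] = 0$ does not force $f = 0$; any $f$ with $Df = 0$, i.e., any function invariant under the modeled transformations, is left unpenalized rather than hard-coded into the hypothesis class. A space endowed with such a semi-norm can be paired with a compatible semi-inner product, which is the kind of semi-inner-product space in which, per the abstract, the kernel representers and their Euclidean embeddings are constructed.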

Cite

Text

Ma et al. "Convex Representation Learning for Generalized Invariance in Semi-Inner-Product Space." International Conference on Machine Learning, 2020.

Markdown

[Ma et al. "Convex Representation Learning for Generalized Invariance in Semi-Inner-Product Space." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/ma2020icml-convex/)

BibTeX

@inproceedings{ma2020icml-convex,
  title     = {{Convex Representation Learning for Generalized Invariance in Semi-Inner-Product Space}},
  author    = {Ma, Yingyi and Ganapathiraman, Vignesh and Yu, Yaoliang and Zhang, Xinhua},
  booktitle = {International Conference on Machine Learning},
  year      = {2020},
  pages     = {6532--6542},
  volume    = {119},
  url       = {https://mlanthology.org/icml/2020/ma2020icml-convex/}
}