Low-Rank Kernel Learning with Bregman Matrix Divergences

Abstract

In this paper, we study low-rank matrix nearness problems, with a focus on learning low-rank positive semidefinite (kernel) matrices for machine learning applications. We propose efficient algorithms that scale linearly in the number of data points and quadratically in the rank of the input matrix. Existing algorithms for learning kernel matrices often scale poorly, with running times that are cubic in the number of data points. We employ Bregman matrix divergences as the measures of nearness; these divergences are natural for learning low-rank kernels since they preserve rank as well as positive semidefiniteness. Special cases of our framework yield faster algorithms for various existing learning problems, and experimental results demonstrate that our algorithms can effectively learn both low-rank and full-rank kernel matrices.
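
For background, the general Bregman matrix divergence generated by a strictly convex function \phi over symmetric matrices, together with the von Neumann and LogDet instances commonly used in this line of work, can be sketched as follows. These are standard textbook forms, with X and Y denoting n-by-n positive semidefinite matrices; the notation here is a sketch and is not quoted from the paper:

D_\phi(X, Y) = \phi(X) - \phi(Y) - \mathrm{tr}\!\left( (\nabla \phi(Y))^{\top} (X - Y) \right)

D_{\mathrm{vN}}(X, Y) = \mathrm{tr}\!\left( X \log X - X \log Y - X + Y \right)   % von Neumann divergence, generated by \phi(X) = \mathrm{tr}(X \log X - X)

D_{\ell d}(X, Y) = \mathrm{tr}(X Y^{-1}) - \log\det(X Y^{-1}) - n                 % LogDet divergence, generated by \phi(X) = -\log\det X

Because these divergences are finite only when the range (and, for the LogDet case, the rank) of X is compatible with that of Y, minimizing them over a constraint set keeps iterates positive semidefinite and, in the low-rank setting, of bounded rank, which is the property the abstract refers to.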

Cite

Text

Kulis et al. "Low-Rank Kernel Learning with Bregman Matrix Divergences." Journal of Machine Learning Research, 2009.

Markdown

[Kulis et al. "Low-Rank Kernel Learning with Bregman Matrix Divergences." Journal of Machine Learning Research, 2009.](https://mlanthology.org/jmlr/2009/kulis2009jmlr-lowrank/)

BibTeX

@article{kulis2009jmlr-lowrank,
  title     = {{Low-Rank Kernel Learning with Bregman Matrix Divergences}},
  author    = {Kulis, Brian and Sustik, Mátyás A. and Dhillon, Inderjit S.},
  journal   = {Journal of Machine Learning Research},
  year      = {2009},
  pages     = {341--376},
  volume    = {10},
  url       = {https://mlanthology.org/jmlr/2009/kulis2009jmlr-lowrank/}
}