Learning Low-Rank Output Kernels

Abstract

Output kernel learning techniques make it possible to simultaneously learn a vector-valued function and a positive semidefinite matrix that describes the relationships between the outputs. In this paper, we introduce a new formulation that imposes a low-rank constraint on the output kernel and operates directly on a factor of the kernel matrix. First, we investigate the connection between output kernel learning and a regularization problem for an architecture with two layers. Then, we show that a variety of methods, such as nuclear norm regularized regression, reduced-rank regression, principal component analysis, and low-rank matrix approximation, can be seen as special cases of the output kernel learning framework. Finally, we introduce a block coordinate descent strategy for learning low-rank output kernels.
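
To make the block coordinate descent idea concrete, the sketch below alternates closed-form updates between a coefficient matrix `A` and a low-rank factor `B` of the output kernel `L = B Bᵀ`. It is not the authors' algorithm: the objective (a regularized least-squares fit `½‖Y − K A Bᵀ‖² + ½·lam·tr(Aᵀ K A) + ½·mu·‖B‖²`), the hyperparameters `lam`, `mu`, `rank`, and the function name are all illustrative assumptions, chosen only to show how optimizing over a factor keeps the output kernel positive semidefinite and low-rank by construction.

```python
import numpy as np

def learn_low_rank_output_kernel(K, Y, rank, lam=1.0, mu=1.0, n_iter=50, seed=0):
    """Block coordinate descent sketch for low-rank output kernel learning.

    Model (one possible parametrization, not necessarily the paper's):
        Y_hat = K @ A @ B.T,   output kernel  L = B @ B.T  (rank <= `rank`)
    Hypothetical objective:
        0.5*||Y - K A B^T||_F^2 + 0.5*lam*tr(A^T K A) + 0.5*mu*||B||_F^2
    """
    n, m = Y.shape
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((m, rank)) / np.sqrt(rank)  # low-rank factor of L
    A = np.zeros((n, rank))                             # kernel expansion coefficients

    for _ in range(n_iter):
        # A-step: stationarity gives  K A (B^T B) + lam * A = Y B.
        # Diagonalize S = B^T B and solve one n x n linear system per column.
        S = B.T @ B
        w, U = np.linalg.eigh(S)                        # S = U diag(w) U^T
        R = Y @ B @ U                                   # rotated right-hand side
        A_rot = np.empty_like(R)
        for j in range(rank):
            A_rot[:, j] = np.linalg.solve(w[j] * K + lam * np.eye(n), R[:, j])
        A = A_rot @ U.T

        # B-step: closed-form ridge update holding A fixed.
        KA = K @ A
        B = Y.T @ KA @ np.linalg.inv(KA.T @ KA + mu * np.eye(rank))

    L = B @ B.T                                         # learned low-rank output kernel
    return A, B, L

if __name__ == "__main__":
    # Toy usage with a simple linear kernel (synthetic data, for illustration only).
    rng = np.random.default_rng(1)
    X = rng.standard_normal((60, 5))
    K = X @ X.T + 1e-6 * np.eye(60)
    Y = rng.standard_normal((60, 8))
    A, B, L = learn_low_rank_output_kernel(K, Y, rank=3)
    print("rank of learned output kernel:", np.linalg.matrix_rank(L))
```

Because the method operates on the factor `B` rather than on `L` itself, the positive semidefiniteness and rank constraints never have to be enforced explicitly, which is the practical appeal of the factored formulation described in the abstract.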

Cite

Text

Dinuzzo and Fukumizu. "Learning Low-Rank Output Kernels." Proceedings of the Third Asian Conference on Machine Learning, 2011.

Markdown

[Dinuzzo and Fukumizu. "Learning Low-Rank Output Kernels." Proceedings of the Third Asian Conference on Machine Learning, 2011.](https://mlanthology.org/acml/2011/dinuzzo2011acml-learning/)

BibTeX

@inproceedings{dinuzzo2011acml-learning,
  title     = {{Learning Low-Rank Output Kernels}},
  author    = {Dinuzzo, Francesco and Fukumizu, Kenji},
  booktitle = {Proceedings of the Third Asian Conference on Machine Learning},
  year      = {2011},
  pages     = {181--196},
  volume    = {20},
  url       = {https://mlanthology.org/acml/2011/dinuzzo2011acml-learning/}
}