Metric Learning for Kernel Regression

Abstract

Kernel regression is a well-established method for nonlinear regression in which the target value for a test point is estimated using a weighted average of the surrounding training samples. The weights are typically obtained by applying a distance-based kernel function to each of the samples, which presumes the existence of a well-defined distance metric. In this paper, we construct a novel algorithm for supervised metric learning, which learns a distance function by directly minimizing the leave-one-out regression error. We show that our algorithm makes kernel regression comparable with the state of the art on several benchmark datasets, and we provide efficient implementation details enabling application to datasets with $\sim O(10\mathrm{k})$ instances. Further, we show that our algorithm can be viewed as a supervised variant of PCA and can be used for dimensionality reduction and high-dimensional data visualization.
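
The objective described in the abstract can be made concrete with a small sketch: a Gaussian kernel applied to distances under a learned linear map $A$ (a Mahalanobis-style metric), with $A$ fit by minimizing the leave-one-out regression error. The Python code below is only an illustration of that objective, not the authors' implementation; the finite-difference gradient, step size, and iteration count are assumptions chosen for readability, whereas the paper works with an analytic gradient for efficiency.

    # Illustrative sketch of metric learning for kernel regression:
    # learn a linear map A, use a Gaussian kernel on the induced distances,
    # and minimize the leave-one-out regression error.
    import numpy as np

    def loo_predictions(A, X, y):
        """Leave-one-out kernel regression estimates under the metric induced by A."""
        Z = X @ A.T                                           # project data: z_i = A x_i
        sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
        K = np.exp(-sq)                                       # Gaussian kernel weights
        np.fill_diagonal(K, 0.0)                              # exclude each point from its own estimate
        return K @ y / (K.sum(axis=1) + 1e-12)

    def loo_loss(A, X, y):
        """Sum of squared leave-one-out regression errors (the quantity being minimized)."""
        return float(np.sum((loo_predictions(A, X, y) - y) ** 2))

    def fit_metric(X, y, dim=2, lr=1e-3, iters=200, eps=1e-5):
        """Fit A by plain gradient descent with finite-difference gradients
        (illustrative only; far slower than an analytic gradient)."""
        rng = np.random.default_rng(0)
        A = rng.normal(scale=0.1, size=(dim, X.shape[1]))
        for _ in range(iters):
            base = loo_loss(A, X, y)
            grad = np.zeros_like(A)
            for i in range(A.shape[0]):
                for j in range(A.shape[1]):
                    A_try = A.copy()
                    A_try[i, j] += eps
                    grad[i, j] = (loo_loss(A_try, X, y) - base) / eps
            A -= lr * grad
        return A

Because the number of rows of $A$ can be kept small (here two), the learned map also serves as a supervised low-dimensional embedding, which is the dimensionality-reduction and visualization use mentioned in the abstract.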

Cite

Text

Weinberger and Tesauro. "Metric Learning for Kernel Regression." Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, 2007.

Markdown

[Weinberger and Tesauro. "Metric Learning for Kernel Regression." Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, 2007.](https://mlanthology.org/aistats/2007/weinberger2007aistats-metric/)

BibTeX

@inproceedings{weinberger2007aistats-metric,
  title     = {{Metric Learning for Kernel Regression}},
  author    = {Weinberger, Kilian Q. and Tesauro, Gerald},
  booktitle = {Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics},
  year      = {2007},
  pages     = {612--619},
  volume    = {2},
  url       = {https://mlanthology.org/aistats/2007/weinberger2007aistats-metric/}
}