Learning Local Invariant Mahalanobis Distances
Abstract
For many tasks and data types, there are natural transformations to which the data should be invariant or insensitive. For instance, in visual recognition, the output should be insensitive to rotation and translation of natural images. This requirement and its implications have been important in many machine learning applications, where tolerance to image transformations has primarily been achieved through robust feature vectors. In this paper we propose a novel and computationally efficient way to learn a local Mahalanobis metric per datum, and show how we can learn a local metric invariant to any transformation in order to improve performance.
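As a rough illustration of the idea (not the paper's actual algorithm), a Mahalanobis metric d(x, y)² = (x−y)ᵀM(x−y) can be made insensitive, to first order, to a given transformation by choosing M so that its null space contains the transformation's tangent directions at a datum. The sketch below assumes NumPy and a hypothetical tangent basis; it simply projects out the tangent subspace:

```python
import numpy as np

def invariance_projector(tangents):
    """PSD matrix M whose null space spans the transformation tangents.

    tangents: (d, k) array, each column a tangent direction of the
    transformation (e.g., an infinitesimal translation) at a datum.
    """
    T, _ = np.linalg.qr(tangents)            # orthonormal basis of the tangent space
    return np.eye(tangents.shape[0]) - T @ T.T

def mahalanobis(x, y, M):
    d = x - y
    return float(np.sqrt(d @ M @ d))

# Toy example: 3-D data, invariance to motion along (1, 0, 0).
M = invariance_projector(np.array([[1.0], [0.0], [0.0]]))
x = np.zeros(3)
print(mahalanobis(x, np.array([5.0, 0.0, 0.0]), M))  # ~0: pure "transformation" motion
print(mahalanobis(x, np.array([0.0, 2.0, 0.0]), M))  # 2.0: motion the metric still penalizes
```

Displacements along the transformation direction contribute nothing to the distance, while orthogonal displacements are measured as usual; the paper learns such local metrics efficiently rather than constructing them by hand.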
Cite
Text

Fetaya and Ullman. "Learning Local Invariant Mahalanobis Distances." International Conference on Machine Learning, 2015.

Markdown

[Fetaya and Ullman. "Learning Local Invariant Mahalanobis Distances." International Conference on Machine Learning, 2015.](https://mlanthology.org/icml/2015/fetaya2015icml-learning/)

BibTeX
@inproceedings{fetaya2015icml-learning,
title = {{Learning Local Invariant Mahalanobis Distances}},
author = {Fetaya, Ethan and Ullman, Shimon},
booktitle = {International Conference on Machine Learning},
year = {2015},
pages = {162--168},
volume = {37},
url = {https://mlanthology.org/icml/2015/fetaya2015icml-learning/}
}