Scaling up Ordinal Embedding: A Landmark Approach
Abstract
Ordinal embedding is the problem of placing n objects into R^d so as to satisfy constraints of the form "object a is closer to object b than to object c." It can accommodate data for which feature- or distance-based embeddings are unavailable, but it is a harder problem. We propose a landmark-based method as a partial solution. At small to medium scales, we present a novel combination of existing methods with new theoretical justification. For very large n, optimizing over an entire embedding breaks down, so we propose a method that first embeds a subset of m << n landmark objects and then embeds the remaining objects independently and in parallel. We prove a distance-error bound for our method in terms of m, show that it runs in O(dn log m) time, and demonstrate empirically that it produces high-quality embeddings in a fraction of the time needed by any published method.
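The two-stage structure described above can be sketched in code. Note the hedging: the paper works from ordinal comparisons, whereas this minimal NumPy sketch uses metric distances purely to illustrate the landmark idea (embed m landmarks first, then place each remaining point independently, which is embarrassingly parallel). It is not the paper's actual algorithm; the landmark stage here is classical MDS and the placement stage is least-squares trilateration, both standing in for the paper's ordinal procedures.

```python
import numpy as np

# Sketch of the generic landmark-embedding idea (NOT the paper's exact
# algorithm): embed m landmarks first, then place each of the remaining
# n - m points independently from its distances to the landmarks.
rng = np.random.default_rng(0)
d, m, n = 2, 20, 200

# Ground-truth positions, used here only to generate distances.
truth = rng.normal(size=(n, d))
landmarks_true = truth[:m]

# Stage 1: embed the landmarks. For the sketch we use classical MDS on
# the landmark-landmark squared-distance matrix.
D2 = ((landmarks_true[:, None] - landmarks_true[None, :]) ** 2).sum(-1)
J = np.eye(m) - np.ones((m, m)) / m              # centering matrix
B = -0.5 * J @ D2 @ J                            # double-centered Gram matrix
w, V = np.linalg.eigh(B)
idx = np.argsort(w)[::-1][:d]                    # top-d eigenpairs
L = V[:, idx] * np.sqrt(np.maximum(w[idx], 0))   # landmark embedding (m, d)

# Stage 2: place every other point independently (hence parallelizable)
# by least-squares trilateration from its distances to the landmarks.
def place(dists, L):
    # Linearize ||x - L_i||^2 = dists_i^2 against landmark 0.
    A = 2 * (L[1:] - L[0])
    b = (dists[0] ** 2 - dists[1:] ** 2
         + (L[1:] ** 2).sum(1) - (L[0] ** 2).sum())
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

rest = truth[m:]
dists_to_L = np.linalg.norm(rest[:, None] - landmarks_true[None, :], axis=-1)
emb = np.array([place(dr, L) for dr in dists_to_L])
```

Because each non-landmark point is placed using only its own measurements against the fixed landmark embedding, the loop over `dists_to_L` can be distributed across workers with no coordination, which is the property the paper exploits for scalability.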
Cite

Text:
Anderton and Aslam. "Scaling up Ordinal Embedding: A Landmark Approach." International Conference on Machine Learning, 2019.

Markdown:
[Anderton and Aslam. "Scaling up Ordinal Embedding: A Landmark Approach." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/anderton2019icml-scaling/)

BibTeX:
@inproceedings{anderton2019icml-scaling,
  title     = {{Scaling up Ordinal Embedding: A Landmark Approach}},
  author    = {Anderton, Jesse and Aslam, Javed},
  booktitle = {International Conference on Machine Learning},
  year      = {2019},
  pages     = {282-290},
  volume    = {97},
  url       = {https://mlanthology.org/icml/2019/anderton2019icml-scaling/}
}