Online Rank Aggregation
Abstract
We consider an online learning framework in which the task is to predict a permutation representing a ranking of n fixed objects. At each trial, the learner incurs a loss defined as the Kendall tau distance between the predicted permutation and the true permutation given by the adversary. This setting arises naturally in many situations, such as information retrieval and recommendation tasks. We prove a lower bound on the cumulative loss along with hardness results. We then propose an algorithm for this problem and prove a relative loss bound showing that our algorithm is close to optimal.
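The per-trial loss in the abstract is the Kendall tau distance, i.e. the number of pairs of objects that the two permutations order differently. A minimal sketch (the function name and the rank-vector representation are illustrative, not from the paper):

```python
from itertools import combinations

def kendall_tau_distance(sigma, pi):
    """Count the pairs of objects ranked in opposite order by sigma and pi.

    sigma and pi are rank vectors over 0..n-1: sigma[i] is the rank that
    the first permutation assigns to object i (representation assumed here).
    """
    n = len(sigma)
    # A pair (i, j) is discordant when the two rankings disagree on its order.
    return sum(
        1
        for i, j in combinations(range(n), 2)
        if (sigma[i] - sigma[j]) * (pi[i] - pi[j]) < 0
    )
```

For n objects the distance ranges from 0 (identical rankings) to n(n-1)/2 (one ranking is the reverse of the other).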
Cite
Yasutake et al. "Online Rank Aggregation." Proceedings of the Fourth Asian Conference on Machine Learning, 2012.

BibTeX:
@inproceedings{yasutake2012acml-online,
  title     = {{Online Rank Aggregation}},
  author    = {Yasutake, Shota and Hatano, Kohei and Takimoto, Eiji and Takeda, Masayuki},
  booktitle = {Proceedings of the Fourth Asian Conference on Machine Learning},
  year      = {2012},
  pages     = {539--553},
  volume    = {25},
  url       = {https://mlanthology.org/acml/2012/yasutake2012acml-online/}
}