Efficient Online Learning of Optimal Rankings: Dimensionality Reduction via Gradient Descent
Abstract
We consider a natural model of online preference aggregation, where sets of preferred items R_1, R_2, ..., R_t, ..., along with a demand for k_t items in each R_t, appear online. Without prior knowledge of (R_t, k_t), the learner maintains a ranking \pi_t aiming to have at least k_t items from R_t ranked high in \pi_t. This is a fundamental problem in preference aggregation, with applications to, e.g., ordering product or news items on web pages based on user scrolling and click patterns.
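To make the model concrete, below is a minimal Python sketch of one plausible reading of the protocol: at each round t a request (R_t, k_t) arrives, the current ranking pays a cost reflecting how low the requested items sit, and the ranking may then be updated. Both the cost convention (position of the k_t-th highest-ranked item of R_t) and the move-to-front update are illustrative assumptions, not the paper's dimensionality-reduction-via-gradient-descent algorithm.

# Illustrative sketch of the online ranking model (assumptions noted above;
# this is not the algorithm from the paper).

def cost(pi, R, k):
    """Position (1-indexed) of the k-th highest-ranked item of R under pi."""
    positions = sorted(pi.index(x) + 1 for x in R)
    return positions[k - 1]

def move_to_front(pi, R):
    """Toy update heuristic (an assumption, not the paper's method):
    promote the requested items to the top, preserving relative order."""
    hits = [x for x in pi if x in R]
    rest = [x for x in pi if x not in R]
    return hits + rest

items = list(range(6))          # universe of items
pi = items[:]                   # initial ranking
requests = [({1, 4}, 1), ({2, 3, 5}, 2), ({1, 4}, 2)]

total = 0
for R, k in requests:
    total += cost(pi, R, k)     # pay for the current ranking first
    pi = move_to_front(pi, R)   # then adapt for future rounds
print("total cost:", total)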
BibTeX
@inproceedings{fotakis2020neurips-efficient,
title = {{Efficient Online Learning of Optimal Rankings: Dimensionality Reduction via Gradient Descent}},
author = {Fotakis, Dimitris and Lianeas, Thanasis and Piliouras, Georgios and Skoulakis, Stratis},
booktitle = {Neural Information Processing Systems},
year = {2020},
url = {https://mlanthology.org/neurips/2020/fotakis2020neurips-efficient/}
}