Improving Relevance Prediction with Transfer Learning in Large-Scale Retrieval Systems
Abstract
Machine-learned large-scale retrieval systems require a large amount of training data representing query-item relevance. However, collecting users' explicit feedback is costly. In this paper, we propose to leverage user logs and implicit feedback as auxiliary objectives to improve relevance modeling in retrieval systems. Specifically, we adopt a two-tower neural net architecture to model query-item relevance given both collaborative and content information. By introducing auxiliary tasks trained with much richer implicit user feedback data, we improve the quality and resolution of the learned representations of queries and items. Applying these learned representations to an industrial retrieval system has delivered significant improvements.
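The two-tower architecture described in the abstract can be illustrated with a minimal sketch: two separate networks embed the query and the item into a shared space, and relevance is scored by their dot product. This is a toy NumPy illustration, not the paper's implementation; all dimensions, weights, and the single-example setup are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def tower(x, w1, w2):
    """A tiny two-layer MLP tower: maps an input feature vector
    into the shared embedding space (hypothetical architecture)."""
    h = np.maximum(x @ w1, 0.0)   # ReLU hidden layer
    e = h @ w2
    return e / np.linalg.norm(e)  # L2-normalize the embedding

# Hypothetical dimensions: 16-d input features, 8-d shared embedding space.
d_in, d_hid, d_emb = 16, 32, 8
wq1, wq2 = rng.normal(size=(d_in, d_hid)), rng.normal(size=(d_hid, d_emb))
wi1, wi2 = rng.normal(size=(d_in, d_hid)), rng.normal(size=(d_hid, d_emb))

query_features = rng.normal(size=d_in)
item_features = rng.normal(size=d_in)

# Relevance score is the dot product of the two tower outputs.
# In the paper's setup, richer implicit-feedback data trains auxiliary
# tasks that share these representations, improving the main relevance task.
score = float(tower(query_features, wq1, wq2) @ tower(item_features, wi1, wi2))
print(score)
```

Because both embeddings are L2-normalized, the score is a cosine similarity in [-1, 1], which makes scores comparable across query-item pairs during retrieval.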
Cite

Text:
Wang et al. "Improving Relevance Prediction with Transfer Learning in Large-Scale Retrieval Systems." ICML 2019 Workshops: AMTL, 2019.

Markdown:
[Wang et al. "Improving Relevance Prediction with Transfer Learning in Large-Scale Retrieval Systems." ICML 2019 Workshops: AMTL, 2019.](https://mlanthology.org/icmlw/2019/wang2019icmlw-improving/)

BibTeX:
@inproceedings{wang2019icmlw-improving,
  title = {{Improving Relevance Prediction with Transfer Learning in Large-Scale Retrieval Systems}},
  author = {Wang, Ruoxi and Zhao, Zhe and Yi, Xinyang and Yang, Ji and Cheng, Derek Zhiyuan and Hong, Lichan and Tjoa, Steve and Kang, Jieqi and Ettinger, Evan and Chi, Ed},
  booktitle = {ICML 2019 Workshops: AMTL},
  year = {2019},
  url = {https://mlanthology.org/icmlw/2019/wang2019icmlw-improving/}
}