A Domain Generalization Perspective on Listwise Context Modeling

Abstract

Learning-to-rank (LETOR) is one of the most popular techniques for solving the ranking problem in information retrieval, and it has received considerable attention in both academia and industry owing to its importance in a wide variety of data mining applications. However, most existing LETOR approaches learn a single global ranking function to handle all queries, ignoring the substantial differences between queries. In this paper, we propose a domain generalization strategy to tackle this problem. We introduce Query-Invariant Listwise Context Modeling (QILCM), a novel neural architecture that eliminates the detrimental influence of inter-query variability by learning query-invariant latent representations, so that the ranking system generalizes better to unseen queries. We evaluate our techniques on benchmark datasets, demonstrating that QILCM outperforms previous state-of-the-art approaches by a substantial margin.
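
The abstract gives only the high-level idea, so the following is a minimal sketch of what query-invariant listwise scoring could look like, assuming each query is treated as a domain whose documents are standardized before an attention-based listwise scorer. The `QueryInvariantListwiseScorer` class, the per-query standardization, and the ListNet-style loss are illustrative assumptions, not the architecture actually proposed in the paper.

```python
# Hypothetical sketch of query-invariant listwise scoring (NOT the paper's
# actual QILCM architecture). Each query is treated as a "domain": document
# features are standardized within the query so the scorer only sees
# query-relative representations, then a listwise context module scores
# all documents of the query jointly.
import torch
import torch.nn as nn


class QueryInvariantListwiseScorer(nn.Module):
    def __init__(self, num_features: int, hidden_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(num_features, hidden_dim), nn.ReLU()
        )
        # Listwise context: self-attention over the documents of one query,
        # so each document's score can depend on its competitors.
        self.context = nn.MultiheadAttention(hidden_dim, num_heads=4,
                                             batch_first=True)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, docs: torch.Tensor) -> torch.Tensor:
        # docs: (num_docs, num_features) -- all candidates for ONE query.
        # Per-query standardization removes query-level shifts and scales,
        # a simple stand-in for learning query-invariant representations.
        z = (docs - docs.mean(0)) / (docs.std(0) + 1e-6)
        h = self.encoder(z).unsqueeze(0)              # (1, num_docs, hidden)
        h, _ = self.context(h, h, h)                  # listwise interactions
        return self.score(h).squeeze(-1).squeeze(0)  # (num_docs,)


def listnet_loss(scores: torch.Tensor, relevance: torch.Tensor) -> torch.Tensor:
    # ListNet-style listwise loss: cross-entropy between the score
    # distribution and the relevance distribution over the candidate list.
    return -(torch.softmax(relevance, 0) *
             torch.log_softmax(scores, 0)).sum()


if __name__ == "__main__":
    model = QueryInvariantListwiseScorer(num_features=10)
    docs = torch.randn(20, 10)       # 20 candidate documents, 10 features each
    relevance = torch.randint(0, 3, (20,)).float()
    loss = listnet_loss(model(docs), relevance)
    loss.backward()
    print(f"loss = {loss.item():.4f}")
```

Here the within-query standardization plays the role of domain alignment; the paper itself may realize query invariance through a learned normalization or a different invariance mechanism.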

Cite

Text

Zhu et al. "A Domain Generalization Perspective on Listwise Context Modeling." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.33015965

Markdown

[Zhu et al. "A Domain Generalization Perspective on Listwise Context Modeling." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/zhu2019aaai-domain/) doi:10.1609/AAAI.V33I01.33015965

BibTeX

@inproceedings{zhu2019aaai-domain,
  title     = {{A Domain Generalization Perspective on Listwise Context Modeling}},
  author    = {Zhu, Lin and Chen, Yihong and He, Bowen},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {5965--5972},
  doi       = {10.1609/AAAI.V33I01.33015965},
  url       = {https://mlanthology.org/aaai/2019/zhu2019aaai-domain/}
}