Learning Plackett-Luce Mixtures from Partial Preferences

Abstract

We propose an EM-based framework for learning the Plackett-Luce model and its mixtures from partial orders. The core of our framework is the efficient sampling of linear extensions of partial orders under the Plackett-Luce model. We propose two Markov chain Monte Carlo (MCMC) samplers: a Gibbs sampler and the generalized repeated insertion method tuned by MCMC (GRIM-MCMC), and prove the efficiency of GRIM-MCMC for a large class of preferences. Experiments on synthetic data show that the algorithm with the Gibbs sampler outperforms the one with GRIM-MCMC. Experiments on real-world data show that the likelihood of the test dataset increases when (i) partial orders provide more information, or (ii) the number of components in the mixture of Plackett-Luce models increases.
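To make the sampling step concrete, the following is a minimal sketch (not the paper's exact GRIM-MCMC algorithm) of a Metropolis-style chain over linear extensions of a partial order under a Plackett-Luce model. It uses the standard fact that for an adjacent transposition at position `i`, the PL probability ratio reduces to a ratio of suffix sums. All function names and the data layout (`gamma` as item utilities, `partial` as a set of must-precede pairs) are illustrative assumptions.

```python
import random

def topo_sort(items, partial):
    # Kahn's algorithm: returns any one linear extension of the partial
    # order (assumed acyclic) to use as the chain's initial state.
    succ = {x: set() for x in items}
    indeg = {x: 0 for x in items}
    for a, b in partial:
        if b not in succ[a]:
            succ[a].add(b)
            indeg[b] += 1
    order, ready = [], [x for x in items if indeg[x] == 0]
    while ready:
        x = ready.pop()
        order.append(x)
        for y in succ[x]:
            indeg[y] -= 1
            if indeg[y] == 0:
                ready.append(y)
    return order

def pl_swap_ratio(gamma, sigma, i):
    # P(sigma') / P(sigma) for swapping adjacent positions i and i+1 under
    # Plackett-Luce: every factor cancels except the denominator at
    # position i+1, which changes from S - gamma[a] to S - gamma[b],
    # where S is the suffix sum of utilities from position i onward.
    S = sum(gamma[x] for x in sigma[i:])
    a, b = sigma[i], sigma[i + 1]
    return (S - gamma[a]) / (S - gamma[b])

def sample_linear_extension(gamma, partial, n_steps, rng=random):
    # Metropolis chain over linear extensions; best item first.
    sigma = topo_sort(list(gamma), partial)
    for _ in range(n_steps):
        i = rng.randrange(len(sigma) - 1)
        a, b = sigma[i], sigma[i + 1]
        if (a, b) in partial:  # swap would violate "a must precede b"
            continue
        if rng.random() < min(1.0, pl_swap_ratio(gamma, sigma, i)):
            sigma[i], sigma[i + 1] = b, a
    return sigma
```

Because proposals that violate the partial order are rejected, every state of the chain is a valid linear extension, and the Metropolis acceptance rule targets the PL distribution restricted to those extensions.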

Cite

Text

Liu et al. "Learning Plackett-Luce Mixtures from Partial Preferences." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.33014328

Markdown

[Liu et al. "Learning Plackett-Luce Mixtures from Partial Preferences." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/liu2019aaai-learning-b/) doi:10.1609/AAAI.V33I01.33014328

BibTeX

@inproceedings{liu2019aaai-learning-b,
  title     = {{Learning Plackett-Luce Mixtures from Partial Preferences}},
  author    = {Liu, Ao and Zhao, Zhibing and Liao, Chao and Lu, Pinyan and Xia, Lirong},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {4328-4335},
  doi       = {10.1609/AAAI.V33I01.33014328},
  url       = {https://mlanthology.org/aaai/2019/liu2019aaai-learning-b/}
}