Democratic Approximation of Lexicographic Preference Models
Abstract
Previous algorithms for learning lexicographic preference models (LPMs) produce a "best guess" LPM that is consistent with the observations. Our approach is more democratic: we do not commit to a single LPM. Instead, we approximate the target using the votes of a collection of consistent LPMs. We present two variations of this method -- "variable voting" and "model voting" -- and empirically show that these democratic algorithms outperform the existing methods. We also introduce an intuitive yet powerful learning bias to prune some of the possible LPMs. We demonstrate how this learning bias can be used with variable and model voting and show that the learning bias improves the learning curve significantly, especially when the number of observations is small.
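The model-voting idea from the abstract can be sketched in a few lines: enumerate every LPM (an attribute ordering plus a preferred value per attribute) that is consistent with the observed pairwise preferences, then predict a new preference by majority vote among those models. The code below is an illustrative sketch of this idea, not the paper's implementation; all function names and the tiny binary-attribute example are our own.

```python
from itertools import permutations, product

def lpm_prefers(order, preferred, x, y):
    """True if x is strictly preferred to y under this LPM: scan the
    attributes in lexicographic order; the first attribute on which
    x and y differ decides, and the object holding that attribute's
    preferred value wins."""
    for i in order:
        if x[i] != y[i]:
            return x[i] == preferred[i]
    return False  # x and y agree on every attribute: no strict preference

def consistent_lpms(n_attrs, observations):
    """All LPMs over n_attrs binary attributes that agree with every
    observed pair (a, b), read as 'a is preferred to b'."""
    return [(order, preferred)
            for order in permutations(range(n_attrs))
            for preferred in product([0, 1], repeat=n_attrs)
            if all(lpm_prefers(order, preferred, a, b)
                   for a, b in observations)]

def model_vote(models, x, y):
    """Majority vote: each consistent LPM votes on whether x > y."""
    votes = sum(lpm_prefers(o, p, x, y) for o, p in models)
    return votes > len(models) / 2

# Each observation (a, b) records that a was preferred to b.
obs = [((1, 0, 0), (0, 1, 1)), ((1, 1, 0), (1, 0, 1))]
models = consistent_lpms(3, obs)
print(len(models), model_vote(models, (1, 0, 0), (0, 1, 1)))
```

Exhaustive enumeration (here n! * 2^n models) is only feasible for toy instances; the point of the sketch is the aggregation step, where every model consistent with the observations votes instead of committing to a single "best guess" LPM.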
Cite
Text
Yaman et al. "Democratic Approximation of Lexicographic Preference Models." International Conference on Machine Learning, 2008. doi:10.1145/1390156.1390307
Markdown
[Yaman et al. "Democratic Approximation of Lexicographic Preference Models." International Conference on Machine Learning, 2008.](https://mlanthology.org/icml/2008/yaman2008icml-democratic/) doi:10.1145/1390156.1390307
BibTeX
@inproceedings{yaman2008icml-democratic,
title = {{Democratic Approximation of Lexicographic Preference Models}},
author = {Yaman, Fusun and Walsh, Thomas J. and Littman, Michael L. and desJardins, Marie},
booktitle = {International Conference on Machine Learning},
year = {2008},
pages = {1200--1207},
doi = {10.1145/1390156.1390307},
url = {https://mlanthology.org/icml/2008/yaman2008icml-democratic/}
}