An MDP-Based Recommender System
Abstract
Typical recommender systems adopt a static view of the recommendation process and treat it as a prediction problem. We argue that it is more appropriate to view the problem of generating recommendations as a sequential optimization problem and, consequently, that Markov decision processes (MDPs) provide a more appropriate model for recommender systems. MDPs introduce two benefits: they take into account the long-term effects of each recommendation and the expected value of each recommendation. To succeed in practice, an MDP-based recommender system must employ a strong initial model, must be solvable quickly, and should not consume too much memory. In this paper, we describe our particular MDP model, its initialization using a predictive model, the solution and update algorithm, and its actual performance on a commercial site. We also describe the particular predictive model we used which outperforms previous models. Our system is one of a small number of commercially deployed recommender systems. As far as we know, it is the first to report experimental analysis conducted on a real commercial site. These results validate the commercial value of recommender systems, and in particular, of our MDP-based approach.
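The abstract describes recommendation as a sequential optimization problem modeled with an MDP. As a rough illustration of that view only (not the paper's actual model, which initializes from a predictive model and uses states built from item sequences), the following minimal Python sketch solves a toy recommendation MDP by value iteration; every state, transition probability, and reward below is an invented assumption for the example.

```python
# Illustrative sketch only: a tiny finite MDP solved by value iteration, where
# states are the user's last item, actions are recommendations, and rewards are
# the assumed profit of a resulting purchase. All values are made up.

states = ["book_A", "book_B", "book_C"]

# transitions[s][a] -> list of (next_state, probability); assumed numbers.
transitions = {
    "book_A": {"rec_B": [("book_B", 0.3), ("book_A", 0.7)],
               "rec_C": [("book_C", 0.1), ("book_A", 0.9)]},
    "book_B": {"rec_C": [("book_C", 0.4), ("book_B", 0.6)],
               "rec_A": [("book_A", 0.2), ("book_B", 0.8)]},
    "book_C": {"rec_A": [("book_A", 0.25), ("book_C", 0.75)],
               "rec_B": [("book_B", 0.15), ("book_C", 0.85)]},
}

def reward(state, action, next_state):
    # Assumed immediate profit if the recommended item is the one purchased.
    bought = next_state == "book_" + action[-1]
    return 10.0 if bought else 0.0

GAMMA = 0.9      # discount factor: weighs the long-term value of future purchases
EPSILON = 1e-6   # convergence threshold

# Value iteration: V(s) = max_a sum_{s'} P(s'|s,a) [R(s,a,s') + gamma V(s')]
V = {s: 0.0 for s in states}
while True:
    delta = 0.0
    for s in states:
        q_values = [
            sum(p * (reward(s, a, s2) + GAMMA * V[s2]) for s2, p in outcomes)
            for a, outcomes in transitions[s].items()
        ]
        new_v = max(q_values)
        delta = max(delta, abs(new_v - V[s]))
        V[s] = new_v
    if delta < EPSILON:
        break

# Greedy policy: recommend the action with the highest expected long-term value,
# rather than the one most likely to be clicked next.
policy = {
    s: max(transitions[s],
           key=lambda a: sum(p * (reward(s, a, s2) + GAMMA * V[s2])
                             for s2, p in transitions[s][a]))
    for s in states
}
print(policy)
```

The point of the sketch is the contrast drawn in the abstract: the greedy policy maximizes discounted long-term value over the whole interaction sequence, not just the immediate likelihood of a single purchase.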
Cite
Text
Shani et al. "An MDP-Based Recommender System." Journal of Machine Learning Research, 2005.
Markdown
[Shani et al. "An MDP-Based Recommender System." Journal of Machine Learning Research, 2005.](https://mlanthology.org/jmlr/2005/shani2005jmlr-mdpbased/)
BibTeX
@article{shani2005jmlr-mdpbased,
title = {{An MDP-Based Recommender System}},
author = {Shani, Guy and Heckerman, David and Brafman, Ronen I.},
journal = {Journal of Machine Learning Research},
year = {2005},
pages = {1265--1295},
volume = {6},
url = {https://mlanthology.org/jmlr/2005/shani2005jmlr-mdpbased/}
}