Minorization-Maximization for Learning Determinantal Point Processes
Abstract
A determinantal point process (DPP) is a powerful probabilistic model that generates diverse random subsets from a ground set. Since a DPP is characterized by a positive definite kernel, a DPP on a finite ground set can be parameterized by a kernel matrix. Recently, DPPs have gained attention in the machine learning community and have been applied to various practical problems; however, there is still room for further research on learning DPPs. In this paper, we propose a simple learning rule for full-rank DPPs based on a minorization-maximization (MM) algorithm, which monotonically increases the likelihood at each iteration. We show that, locally, the minorizer in our MM algorithm provides a tighter lower bound than that of an existing method. We also generalize the algorithm for further acceleration. In our experiments on both synthetic and real-world datasets, our method outperforms existing methods in most settings. Our code is available at https://github.com/ISMHinoLab/DPPMMEstimation.
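As background for the likelihood being maximized, a full-rank DPP on a ground set of size M assigns probability P(A) = det(L_A) / det(L + I) to a subset A, where L is the M x M kernel matrix and L_A is its submatrix indexed by A. The sketch below computes this log-likelihood for observed subsets; the function name and setup are illustrative, not from the paper.

```python
import numpy as np

def dpp_log_likelihood(L, subsets):
    """Sum of log P(A_n) over observed subsets, where
    P(A) = det(L_A) / det(L + I) for a full-rank DPP kernel L."""
    M = L.shape[0]
    # Normalization constant: log det(L + I), shared by every subset.
    _, logdet_norm = np.linalg.slogdet(L + np.eye(M))
    ll = 0.0
    for A in subsets:
        idx = np.asarray(A)
        # log det of the principal submatrix indexed by A.
        _, logdet_A = np.linalg.slogdet(L[np.ix_(idx, idx)])
        ll += logdet_A - logdet_norm
    return ll

# Example: with the identity kernel on 3 items, det(L + I) = 8 and
# det(L_A) = 1 for any A, so every one of the 8 subsets has P(A) = 1/8.
L = np.eye(3)
print(dpp_log_likelihood(L, [[0]]))  # log(1/8) ≈ -2.079
```

An MM algorithm such as the one proposed in the paper maximizes this objective by repeatedly maximizing a surrogate (minorizer) that lower-bounds it and touches it at the current iterate.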
Cite
Text
Kawashima and Hino. "Minorization-Maximization for Learning Determinantal Point Processes." Transactions on Machine Learning Research, 2023.
Markdown
[Kawashima and Hino. "Minorization-Maximization for Learning Determinantal Point Processes." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/kawashima2023tmlr-minorizationmaximization/)
BibTeX
@article{kawashima2023tmlr-minorizationmaximization,
title = {{Minorization-Maximization for Learning Determinantal Point Processes}},
author = {Kawashima, Takahiro and Hino, Hideitsu},
journal = {Transactions on Machine Learning Research},
year = {2023},
url = {https://mlanthology.org/tmlr/2023/kawashima2023tmlr-minorizationmaximization/}
}