Simplifying Mixture Models Through Function Approximation
Abstract
The finite mixture model is a powerful tool in many statistical learning problems. In this paper, we propose a general, structure-preserving approach to reduce its model complexity, which can bring significant computational benefits in many applications. The basic idea is to group the original mixture components into compact clusters, and then minimize an upper bound on the approximation error between the original and simplified models. By adopting the L2 norm as the distance measure between mixture models, we can derive closed-form solutions that are more robust and reliable than those based on KL-divergence distance measures. Moreover, the complexity of our algorithm is only linear in the sample size and dimensionality. Experiments on density estimation and clustering-based image segmentation demonstrate its outstanding performance in terms of both speed and accuracy.
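The closed-form solutions mentioned in the abstract rest on a standard Gaussian identity: the product of two Gaussian densities integrates to a Gaussian density evaluated at one of the means, ∫ N(x; m, S) N(x; n, T) dx = N(m; n, S + T), which makes the L2 distance between two Gaussian mixtures exactly computable. Below is a minimal Python sketch of that computation, not the paper's implementation: all function names are ours, and the moment-preserving merge in the example merely stands in for the paper's cluster-and-refit step.

import numpy as np
from scipy.stats import multivariate_normal

def gmm_cross_term(w1, mu1, cov1, w2, mu2, cov2):
    # <f, g> = sum_i sum_j a_i * b_j * integral N(x; m_i, S_i) N(x; n_j, T_j) dx,
    # where each integral equals N(m_i; n_j, S_i + T_j) by the Gaussian product identity.
    total = 0.0
    for a, m, s in zip(w1, mu1, cov1):
        for b, n, t in zip(w2, mu2, cov2):
            total += a * b * multivariate_normal.pdf(m, mean=n, cov=s + t)
    return total

def gmm_l2_distance_sq(w_f, mu_f, cov_f, w_g, mu_g, cov_g):
    # ||f - g||_2^2 = <f, f> - 2 <f, g> + <g, g>, all three terms in closed form.
    return (gmm_cross_term(w_f, mu_f, cov_f, w_f, mu_f, cov_f)
            - 2.0 * gmm_cross_term(w_f, mu_f, cov_f, w_g, mu_g, cov_g)
            + gmm_cross_term(w_g, mu_g, cov_g, w_g, mu_g, cov_g))

# Example: collapse a 3-component 1-D mixture into a single Gaussian with a
# moment-preserving merge, then measure the L2 error of the simplification.
w_f = [0.3, 0.3, 0.4]
mu_f = [np.array([0.0]), np.array([0.5]), np.array([1.0])]
cov_f = [np.eye(1), np.eye(1), np.eye(1)]

mu_g = sum(w * m for w, m in zip(w_f, mu_f))
cov_g = sum(w * (c + np.outer(m - mu_g, m - mu_g))
            for w, m, c in zip(w_f, mu_f, cov_f))

print(gmm_l2_distance_sq(w_f, mu_f, cov_f, [1.0], [mu_g], [cov_g]))

Each cross term costs one Gaussian evaluation per pair of components, so scoring a candidate simplification is cheap relative to refitting the full mixture from data.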
Cite
Text
Zhang and Kwok. "Simplifying Mixture Models Through Function Approximation." Neural Information Processing Systems, 2006.

Markdown
[Zhang and Kwok. "Simplifying Mixture Models Through Function Approximation." Neural Information Processing Systems, 2006.](https://mlanthology.org/neurips/2006/zhang2006neurips-simplifying/)

BibTeX
@inproceedings{zhang2006neurips-simplifying,
title = {{Simplifying Mixture Models Through Function Approximation}},
author = {Zhang, Kai and Kwok, James T.},
booktitle = {Neural Information Processing Systems},
year = {2006},
pages = {1577--1584},
url = {https://mlanthology.org/neurips/2006/zhang2006neurips-simplifying/}
}