Discrete Multiple Kernel K-Means
Abstract
Multiple kernel k-means (MKKM) and its variants exploit complementary information from different kernels, achieving better performance than kernel k-means (KKM). However, the optimization procedures of previous works all comprise two stages: learning a continuous relaxed label matrix, then recovering the discrete one through an extra discretization step. Such a two-stage strategy gives rise to a mismatch between the two objectives and severe information loss. To address this problem, we propose a novel Discrete Multiple Kernel k-means (DMKKM) model, solved by an optimization algorithm that obtains the discrete cluster indicator matrix directly, without any subsequent discretization step. Moreover, DMKKM strictly measures the correlations among kernels, which enhances kernel fusion by reducing redundancy and improving diversity. Furthermore, DMKKM is parameter-free, avoiding intractable hyperparameter tuning and making it feasible in practical applications. Extensive experiments demonstrate the effectiveness and superiority of the proposed model.
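To make the setting concrete, below is a minimal sketch of the MKKM pipeline the abstract builds on: base kernels are fused into a single kernel, on which kernel k-means assigns labels. This is not the authors' DMKKM algorithm (which optimizes the discrete indicator matrix directly and learns the fusion from kernel correlations); the uniform kernel weights, the Lloyd-style updates, and all function names here are illustrative assumptions.

```python
import numpy as np

def kernel_kmeans(K, n_clusters, n_iters=100, seed=0):
    """Lloyd-style kernel k-means on a precomputed kernel matrix K (n x n)."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    labels = rng.integers(0, n_clusters, size=n)
    for _ in range(n_iters):
        dist = np.zeros((n, n_clusters))
        for c in range(n_clusters):
            mask = labels == c
            nc = max(mask.sum(), 1)
            # ||phi(x_i) - mu_c||^2 = K_ii - (2/|C|) sum_j K_ij + (1/|C|^2) sum_{j,l} K_jl
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, mask].sum(axis=1) / nc
                          + K[np.ix_(mask, mask)].sum() / nc**2)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break  # assignments stable: converged
        labels = new_labels
    return labels

def multiple_kernel_kmeans(kernels, n_clusters, weights=None):
    """Fuse base kernels with fixed weights, then cluster with kernel k-means."""
    if weights is None:
        # Uniform weights are an assumption; MKKM variants learn these,
        # and DMKKM additionally accounts for inter-kernel correlations.
        weights = np.full(len(kernels), 1.0 / len(kernels))
    K = sum(w * Km for w, Km in zip(weights, kernels))
    return kernel_kmeans(K, n_clusters)

# Usage sketch: a linear and a Gaussian kernel over the same toy data.
X = np.random.default_rng(1).normal(size=(100, 5))
K1 = X @ X.T                                      # linear kernel
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
K2 = np.exp(-sq / 2.0)                            # Gaussian kernel, bandwidth 1
labels = multiple_kernel_kmeans([K1, K2], n_clusters=3)
```

The two-stage issue the paper targets arises in spectral-relaxation variants of this objective, where a continuous label matrix is computed first and then rounded; the Lloyd-style loop above sidesteps that only for a fixed kernel combination, whereas DMKKM keeps the indicator matrix discrete throughout while also learning the fusion.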
Cite
Text
Wang et al. "Discrete Multiple Kernel K-Means." International Joint Conference on Artificial Intelligence, 2021. doi:10.24963/IJCAI.2021/428
Markdown
[Wang et al. "Discrete Multiple Kernel K-Means." International Joint Conference on Artificial Intelligence, 2021.](https://mlanthology.org/ijcai/2021/wang2021ijcai-discrete/) doi:10.24963/IJCAI.2021/428
BibTeX
@inproceedings{wang2021ijcai-discrete,
title = {{Discrete Multiple Kernel K-Means}},
author = {Wang, Rong and Lu, Jitao and Lu, Yihang and Nie, Feiping and Li, Xuelong},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2021},
pages = {3111-3117},
doi = {10.24963/IJCAI.2021/428},
url = {https://mlanthology.org/ijcai/2021/wang2021ijcai-discrete/}
}