Clustering Positive Definite Matrices by Learning Information Divergences
Abstract
Data representations based on Symmetric Positive Definite (SPD) matrices are gaining popularity in visual learning applications. When comparing SPD matrices, measures based on non-linear geometries often yield beneficial results. However, a manual selection process is commonly used to identify the appropriate measure for a visual learning application. In this paper, we study the problem of clustering SPD matrices while automatically learning a suitable measure. We propose a novel formulation that jointly (i) clusters the input SPD matrices in a K-Means setup and (ii) learns a suitable non-linear measure for comparing SPD matrices. For (ii), we capitalize on the recently introduced αβ-logdet divergence, which generalizes a family of popular similarity measures on SPD matrices. Our formulation is cast in a Riemannian optimization framework and solved using a conjugate gradient scheme. We present experiments on five computer vision datasets and demonstrate state-of-the-art performance.
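The αβ-logdet divergence that the paper builds on (introduced by Cichocki et al.) admits a closed form in terms of the generalized eigenvalues of the two SPD matrices. Below is a minimal sketch of that divergence, not the paper's full joint clustering formulation; the function name `ab_logdet_div` and the default α = β = 0.5 are illustrative choices, and the sketch assumes α, β ≠ 0 with α + β ≠ 0.

```python
import numpy as np
from scipy.linalg import eigh

def ab_logdet_div(P, Q, alpha=0.5, beta=0.5):
    """Sketch of the alpha-beta log-det divergence between SPD matrices P, Q.

    Assumes alpha != 0, beta != 0, and alpha + beta != 0 (other settings are
    defined via limits in the original derivation and are omitted here).
    """
    # The generalized eigenvalues lam_i of the pencil (P, Q) coincide with the
    # spectrum of P Q^{-1}, so the log-det reduces to a sum over eigenvalues.
    lam = eigh(P, Q, eigvals_only=True)
    return np.sum(np.log((alpha * lam**beta + beta * lam**(-alpha))
                         / (alpha + beta))) / (alpha * beta)
```

By weighted AM-GM, each summand is nonnegative for α, β > 0 and vanishes exactly when the corresponding eigenvalue is 1, so the divergence is zero iff P = Q; choosing α = β also makes it symmetric in its arguments, while other (α, β) settings recover different members of the family of SPD similarity measures the abstract refers to.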
Cite
Text
Stanitsas et al. "Clustering Positive Definite Matrices by Learning Information Divergences." IEEE/CVF International Conference on Computer Vision Workshops, 2017. doi:10.1109/ICCVW.2017.155
Markdown
[Stanitsas et al. "Clustering Positive Definite Matrices by Learning Information Divergences." IEEE/CVF International Conference on Computer Vision Workshops, 2017.](https://mlanthology.org/iccvw/2017/stanitsas2017iccvw-clustering/) doi:10.1109/ICCVW.2017.155
BibTeX
@inproceedings{stanitsas2017iccvw-clustering,
title = {{Clustering Positive Definite Matrices by Learning Information Divergences}},
author = {Stanitsas, Panagiotis and Cherian, Anoop and Morellas, Vassilios and Papanikolopoulos, Nikolaos},
booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
year = {2017},
pages = {1304-1312},
doi = {10.1109/ICCVW.2017.155},
url = {https://mlanthology.org/iccvw/2017/stanitsas2017iccvw-clustering/}
}