Sliced-Wasserstein on Symmetric Positive Definite Matrices for M/EEG Signals
Abstract
When dealing with electro- or magnetoencephalography records, many supervised prediction tasks are solved by working with covariance matrices to summarize the signals. Learning with these matrices requires the use of Riemannian geometry to account for their structure. In this paper, we propose a new method to deal with distributions of covariance matrices, and demonstrate its computational efficiency on M/EEG multivariate time series. More specifically, we define a Sliced-Wasserstein distance between measures of symmetric positive definite matrices that comes with strong theoretical guarantees. Then, we take advantage of its properties and kernel methods to apply this discrepancy to brain-age prediction from MEG data, and compare it to state-of-the-art algorithms based on Riemannian geometry. Finally, we show that it is an efficient surrogate to the Wasserstein distance in domain adaptation for Brain-Computer Interface applications.
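To make the idea concrete, here is a minimal sketch of a sliced-Wasserstein discrepancy between two sets of SPD (covariance) matrices. It is an illustration only, not the paper's exact construction: it uses the log-Euclidean simplification (matrix logarithm, then Euclidean slicing in the tangent space) rather than the geodesic projections defined in the paper, and the function names and defaults are hypothetical.

```python
import numpy as np

def matrix_log(S):
    # Matrix logarithm of an SPD matrix via eigendecomposition:
    # S = V diag(w) V^T  =>  log(S) = V diag(log w) V^T.
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def sliced_wasserstein_spd(X, Y, n_projections=50, seed=0):
    """Monte-Carlo sliced-Wasserstein-2 between two equally sized sets of
    SPD matrices, computed in the log-Euclidean tangent space (a
    simplification of the paper's geodesic construction)."""
    rng = np.random.default_rng(seed)
    # Map each SPD matrix to a symmetric matrix, then flatten to a vector;
    # the Euclidean norm of the flattened matrix equals its Frobenius norm.
    logX = np.stack([matrix_log(S).ravel() for S in X])
    logY = np.stack([matrix_log(S).ravel() for S in Y])
    d = logX.shape[1]
    total = 0.0
    for _ in range(n_projections):
        # Draw a random direction on the unit sphere and project.
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)
        # 1D squared W2 between empirical measures via sorted projections.
        px, py = np.sort(logX @ theta), np.sort(logY @ theta)
        total += np.mean((px - py) ** 2)
    return np.sqrt(total / n_projections)
```

The cost is dominated by one eigendecomposition per matrix plus cheap 1D sorts per projection, which is the computational advantage sliced variants have over the full Wasserstein distance.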
Cite
Text
Bonet et al. "Sliced-Wasserstein on Symmetric Positive Definite Matrices for M/EEG Signals." International Conference on Machine Learning, 2023.
Markdown
[Bonet et al. "Sliced-Wasserstein on Symmetric Positive Definite Matrices for M/EEG Signals." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/bonet2023icml-slicedwasserstein/)
BibTeX
@inproceedings{bonet2023icml-slicedwasserstein,
title = {{Sliced-Wasserstein on Symmetric Positive Definite Matrices for M/EEG Signals}},
author = {Bonet, Clément and Malézieux, Benoît and Rakotomamonjy, Alain and Drumetz, Lucas and Moreau, Thomas and Kowalski, Matthieu and Courty, Nicolas},
booktitle = {International Conference on Machine Learning},
year = {2023},
pages = {2777--2805},
volume = {202},
url = {https://mlanthology.org/icml/2023/bonet2023icml-slicedwasserstein/}
}