A Multiscale Framework for Blind Separation of Linearly Mixed Signals
Abstract
We consider the problem of blind separation of unknown source signals or images from a given set of their linear mixtures. It was discovered recently that exploiting the sparsity of sources and their mixtures, once they are projected onto a proper space of sparse representation, improves the quality of separation. In this study we take advantage of the properties of multiscale transforms, such as wavelet packets, to decompose signals into sets of local features with various degrees of sparsity. We then study how the separation error is affected by the sparsity of the decomposition coefficients, and by the misfit between the probabilistic model of these coefficients and their actual distribution. Our error estimator, based on the Taylor expansion of the quasi-ML function, is used to select the best subsets of coefficients, which are utilized, in turn, in the subsequent separation. The performance of the algorithm is evaluated on noise-free and noisy data. Experiments with simulated signals, musical sounds, and images demonstrate significant improvement of separation quality over previously reported results.
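To make the workflow described in the abstract concrete, below is a minimal sketch of the multiscale separation idea, not the authors' exact algorithm: the mixtures are decomposed with a multilevel wavelet transform (plain `pywt.wavedec` here, rather than the full wavelet-packet tree used in the paper), the subband whose coefficients look sparsest is selected, and an unmixing matrix is estimated from those coefficients alone (FastICA stands in for the paper's quasi-ML estimator and Taylor-expansion error criterion). The wavelet name, decomposition level, and the normalized l1/l2 sparsity proxy are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
import pywt
from sklearn.decomposition import FastICA


def separate_multiscale(mixtures, wavelet="db4", level=4):
    """Toy multiscale separation. mixtures: array of shape (n_channels, n_samples)."""
    n_channels, _ = mixtures.shape

    # Decompose every mixture channel into the same set of subbands.
    decomps = [pywt.wavedec(x, wavelet, level=level) for x in mixtures]
    n_bands = len(decomps[0])

    # Sparsity proxy: normalized l1/l2 ratio (smaller = sparser), scaled by
    # sqrt(length) so subbands of different sizes are comparable.
    def sparsity_score(c):
        return np.sum(np.abs(c)) / ((np.sqrt(np.sum(c ** 2)) + 1e-12) * np.sqrt(c.size))

    scores = []
    for b in range(n_bands):
        coeffs = np.vstack([decomps[ch][b] for ch in range(n_channels)])
        scores.append(sparsity_score(coeffs.ravel()))
    best = int(np.argmin(scores))

    # Estimate the unmixing matrix from the sparsest subband only
    # (FastICA is a stand-in for the quasi-ML estimator of the paper).
    Y = np.vstack([decomps[ch][best] for ch in range(n_channels)]).T
    ica = FastICA(n_components=n_channels, random_state=0)
    ica.fit(Y)
    W = ica.components_

    # Apply the matrix learned on sparse coefficients to the raw mixtures.
    centered = mixtures - mixtures.mean(axis=1, keepdims=True)
    return W @ centered
```

As in the paper's scheme, the key point is that the (un)mixing matrix is estimated only on the sparsest coefficient subset, while the recovered sources are obtained by applying that matrix to the full mixtures; the recovered sources are, as usual in blind separation, determined only up to permutation and scaling.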
Cite
Text
Kisilev et al. "A Multiscale Framework for Blind Separation of Linearly Mixed Signals." Journal of Machine Learning Research, 2003.
Markdown
[Kisilev et al. "A Multiscale Framework for Blind Separation of Linearly Mixed Signals." Journal of Machine Learning Research, 2003.](https://mlanthology.org/jmlr/2003/kisilev2003jmlr-multiscale/)
BibTeX
@article{kisilev2003jmlr-multiscale,
title = {{A Multiscale Framework for Blind Separation of Linearly Mixed Signals}},
author = {Kisilev, Pavel and Zibulevsky, Michael and Zeevi, Yehoshua Y.},
journal = {Journal of Machine Learning Research},
year = {2003},
pages = {1339--1363},
volume = {4},
url = {https://mlanthology.org/jmlr/2003/kisilev2003jmlr-multiscale/}
}