Invariant Feature Extraction and Classification in Kernel Spaces
Abstract
In hyperspectral imagery one pixel typically consists of a mixture of the reflectance spectra of several materials, where the mixture coefficients correspond to the abundances of the constituting materials. We assume linear combinations of reflectance spectra with some additive normal sensor noise and derive a probabilistic MAP framework for analyzing hyperspectral data. As the material reflectance characteristics are not known a priori, we face the problem of unsupervised linear unmixing. The incorporation of different kinds of prior information (e.g., positivity and normalization of the abundances) naturally leads to a family of interesting algorithms; for example, in the noise-free case it yields an algorithm that can be understood as constrained independent component analysis (ICA). Simulations underline the usefulness of our theory.
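The linear mixing model sketched in the abstract can be written out as follows; note that the symbols ($x$, $M$, $a$, $n$) are our own illustrative notation, not taken from the paper:

```latex
% Sketch of the linear mixing model with additive Gaussian sensor noise:
%   x : observed pixel spectrum
%   M : matrix whose columns are the material reflectance spectra
%   a : abundance vector for this pixel
%   n : additive normal sensor noise
x = M a + n, \qquad n \sim \mathcal{N}(0, \sigma^2 I),
\qquad a_i \ge 0, \qquad \sum_i a_i = 1.
```

Under this model, the unsupervised unmixing task is to recover $M$ and $a$ from $x$ alone; the MAP framework does so by maximizing the posterior $p(M, a \mid x) \propto p(x \mid M, a)\, p(a)$, where the abundance prior $p(a)$ encodes the positivity and normalization constraints above.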
Cite

Text

Mika et al. "Invariant Feature Extraction and Classification in Kernel Spaces." Neural Information Processing Systems, 1999.

Markdown

[Mika et al. "Invariant Feature Extraction and Classification in Kernel Spaces." Neural Information Processing Systems, 1999.](https://mlanthology.org/neurips/1999/mika1999neurips-invariant/)

BibTeX
@inproceedings{mika1999neurips-invariant,
title = {{Invariant Feature Extraction and Classification in Kernel Spaces}},
author = {Mika, Sebastian and Rätsch, Gunnar and Weston, Jason and Schölkopf, Bernhard and Smola, Alex J. and Müller, Klaus-Robert},
booktitle = {Neural Information Processing Systems},
year = {1999},
pages = {526-532},
url = {https://mlanthology.org/neurips/1999/mika1999neurips-invariant/}
}