Kernel Dimensionality Reduction for Supervised Learning
Abstract
We propose a novel method of dimensionality reduction for supervised learning. Given a regression or classification problem in which we wish to predict a variable Y from an explanatory vector X, we treat the problem of dimensionality reduction as that of finding a low-dimensional "effective subspace" of X which retains the statistical relationship between X and Y. We show that this problem can be formulated in terms of conditional independence. To turn this formulation into an optimization problem, we characterize the notion of conditional independence using covariance operators on reproducing kernel Hilbert spaces; this allows us to derive a contrast function for estimation of the effective subspace. Unlike many conventional methods, the proposed method requires neither assumptions on the marginal distribution of X, nor a parametric model of the conditional distribution of Y.
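To make the estimation step concrete, below is a minimal numerical sketch of evaluating a contrast function of this kind. It uses the empirical conditional-covariance form Tr[G_Y (G_U + n eps I)^{-1}] with Gaussian kernels, where U = XB is the projection of the data onto a candidate subspace spanned by the columns of B. The kernel choice, bandwidths, regularization eps, and all function names here are illustrative assumptions, not the paper's exact formulation or code; in the full method, B would be optimized over projection matrices to minimize this contrast.

import numpy as np

def centered_gram(Z, sigma):
    # Centered Gaussian-kernel Gram matrix K_c = H K H, with H = I - (1/n) 1 1^T.
    sq = np.sum(Z**2, axis=1)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2 * Z @ Z.T) / (2 * sigma**2))
    n = Z.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kdr_contrast(B, X, Y, sigma_x=1.0, sigma_y=1.0, eps=1e-3):
    # Empirical contrast Tr[G_Y (G_U + n*eps*I)^{-1}] for U = X B (assumed form).
    # Smaller values indicate the subspace retains more of the information
    # in X that is relevant for predicting Y.
    n = X.shape[0]
    G_U = centered_gram(X @ B, sigma_x)
    G_Y = centered_gram(Y, sigma_y)
    return np.trace(G_Y @ np.linalg.inv(G_U + n * eps * np.eye(n)))

# Toy usage: Y depends on X only through its first coordinate, so the
# one-dimensional effective subspace is spanned by (1, 0, 0).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
Y = (np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)).reshape(-1, 1)
B_good = np.array([[1.0], [0.0], [0.0]])  # spans the true effective subspace
B_bad = np.array([[0.0], [0.0], [1.0]])   # orthogonal to it
print(kdr_contrast(B_good, X, Y))  # typically smaller ...
print(kdr_contrast(B_bad, X, Y))   # ... than this value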
Cite
Text
Fukumizu et al. "Kernel Dimensionality Reduction for Supervised Learning." Neural Information Processing Systems, 2003.
Markdown
[Fukumizu et al. "Kernel Dimensionality Reduction for Supervised Learning." Neural Information Processing Systems, 2003.](https://mlanthology.org/neurips/2003/fukumizu2003neurips-kernel/)
BibTeX
@inproceedings{fukumizu2003neurips-kernel,
title = {{Kernel Dimensionality Reduction for Supervised Learning}},
author = {Fukumizu, Kenji and Bach, Francis R. and Jordan, Michael I.},
booktitle = {Neural Information Processing Systems},
year = {2003},
pages = {81--88},
url = {https://mlanthology.org/neurips/2003/fukumizu2003neurips-kernel/}
}