Regularized Principal Manifolds (Kernel Machines Section)
Abstract
Many settings of unsupervised learning can be viewed as quantization problems: the minimization of the expected quantization error subject to some restrictions. This allows the use of tools such as regularization from the theory of (supervised) risk minimization for unsupervised learning. This setting turns out to be closely related to principal curves, the generative topographic map, and robust coding. We explore this connection in two ways: (1) we propose an algorithm for finding principal manifolds that can be regularized in a variety of ways; and (2) we derive uniform convergence bounds and hence bounds on the learning rates of the algorithm. In particular, we give bounds on the covering numbers, which allow us to obtain nearly optimal learning rates for certain types of regularization operators. Experimental results demonstrate the feasibility of the approach.
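To make the abstract's objective concrete, here is a hedged sketch of the kind of regularized quantization functional the paper studies: for a map $f$ from a low-dimensional index space $Z$ into the data space, one minimizes the empirical quantization error plus a regularization term. The notation below ($m$ samples $x_i$, trade-off parameter $\lambda$, regularizer $Q[f]$, squared-error cost) follows common usage and is an assumption of this sketch, not a verbatim excerpt:

$$
R_{\mathrm{reg}}[f] \;=\; \frac{1}{m} \sum_{i=1}^{m} \min_{z \in Z} \bigl\| x_i - f(z) \bigr\|^{2} \;+\; \lambda\, Q[f]
$$

Special cases situate the related methods named above: a finite index set $Z$ with $\lambda = 0$ recovers k-means vector quantization, while a continuous one-dimensional $Z$ yields (regularized) principal curves.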
Cite

Text

Smola et al. "Regularized Principal Manifolds (Kernel Machines Section)." Journal of Machine Learning Research, 2001.

Markdown

[Smola et al. "Regularized Principal Manifolds (Kernel Machines Section)." Journal of Machine Learning Research, 2001.](https://mlanthology.org/jmlr/2001/smola2001jmlr-regularized/)

BibTeX
@article{smola2001jmlr-regularized,
title = {{Regularized Principal Manifolds (Kernel Machines Section)}},
author = {Smola, Alexander J. and Mika, Sebastian and Schölkopf, Bernhard and Williamson, Robert C.},
journal = {Journal of Machine Learning Research},
year = {2001},
pages = {179--209},
volume = {1},
url = {https://mlanthology.org/jmlr/2001/smola2001jmlr-regularized/}
}