Shared Kernel Information Embedding for Discriminative Inference
Abstract
Latent variable models (LVMs), like the shared GPLVM and the spectral latent variable model, help mitigate over-fitting when learning discriminative methods from small or moderately sized training sets. Nevertheless, existing methods suffer from several problems: (1) complexity; (2) the lack of explicit mappings to and from the latent space; (3) an inability to cope with multi-modality; and (4) the lack of a well-defined density over the latent space. We propose an LVM called the shared kernel information embedding (sKIE). It defines a coherent density over a latent space and multiple input/output spaces (e.g., image features and poses), and it is easy to condition on a latent state, or on combinations of the input/output states. Learning is quadratic, and it works well on small datasets. With datasets too large to learn a coherent global model, one can use sKIE to learn local online models. sKIE permits missing data during inference, and partially labelled data during learning. We use sKIE for human pose inference.
Cite
Text
Sigal et al. "Shared Kernel Information Embedding for Discriminative Inference." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2009. doi:10.1109/CVPR.2009.5206576
Markdown
[Sigal et al. "Shared Kernel Information Embedding for Discriminative Inference." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2009.](https://mlanthology.org/cvpr/2009/sigal2009cvpr-shared/) doi:10.1109/CVPR.2009.5206576
BibTeX
@inproceedings{sigal2009cvpr-shared,
title = {{Shared Kernel Information Embedding for Discriminative Inference}},
author = {Sigal, Leonid and Memisevic, Roland and Fleet, David J.},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2009},
pages = {2852-2859},
doi = {10.1109/CVPR.2009.5206576},
url = {https://mlanthology.org/cvpr/2009/sigal2009cvpr-shared/}
}