Probabilistic Tangent Subspace: A Unified View
Abstract
Tangent Distance (TD) is a classical method for invariant pattern classification. However, conventional TD requires tangent vectors to be obtained in advance, which is very difficult except for image objects. This paper extends TD to more general pattern classification tasks. The basic assumption is that tangent vectors can be approximately represented by the pattern variations. We propose three probabilistic subspace models to encode the variations: the linear subspace, nonlinear subspace, and manifold subspace models. These three models are addressed in a unified view, namely Probabilistic Tangent Subspace (PTS). Experiments show that PTS can achieve promising classification performance on non-image datasets.
Cite
Text
Lee et al. "Probabilistic Tangent Subspace: A Unified View." International Conference on Machine Learning, 2004. doi:10.1145/1015330.1015362
Markdown
[Lee et al. "Probabilistic Tangent Subspace: A Unified View." International Conference on Machine Learning, 2004.](https://mlanthology.org/icml/2004/lee2004icml-probabilistic/) doi:10.1145/1015330.1015362
BibTeX
@inproceedings{lee2004icml-probabilistic,
title = {{Probabilistic Tangent Subspace: A Unified View}},
author = {Lee, Jianguo and Wang, Jingdong and Zhang, Changshui and Bian, Zhaoqi},
booktitle = {International Conference on Machine Learning},
year = {2004},
doi = {10.1145/1015330.1015362},
url = {https://mlanthology.org/icml/2004/lee2004icml-probabilistic/}
}