Non-Rigid Object Localization and Segmentation Using Eigenspace Representation
Abstract
This paper presents a novel non-rigid object localization and segmentation algorithm using an eigenspace representation. Previous eigenspace approaches to object tracking use vectorized image regions as observations, whereas the proposed method treats each individual pixel as an observation. Localization using this pixel-wise eigenspace representation is robust to noise and occlusions. A unique feature of the approach is that it permits segmentation in addition to localization. Both localization and segmentation are carried out via a similarity function derived in the eigenspace. The algorithm is tested on synthetic and real-world tracking examples to demonstrate its performance.
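The core idea described in the abstract — learning an eigenspace from per-pixel observations and scoring pixels by a similarity function in that space — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature layout (one vector per pixel), the choice of PCA via SVD, and the use of negative reconstruction error as the similarity function are all assumptions made here for clarity.

```python
import numpy as np

def fit_eigenspace(obs, k):
    """Learn a k-dimensional eigenspace from training pixel observations.
    obs: (n, d) array, one feature vector per pixel (e.g. position + color)."""
    mean = obs.mean(axis=0)
    centered = obs - mean
    # Principal components via SVD of the centered observation matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                       # (k, d) top-k eigenvectors
    return mean, basis

def similarity(obs, mean, basis):
    """Per-pixel similarity: negative squared reconstruction error after
    projecting onto the eigenspace (higher = more object-like)."""
    centered = obs - mean
    coeffs = centered @ basis.T          # project into the eigenspace
    recon = coeffs @ basis               # back-project to feature space
    return -np.sum((centered - recon) ** 2, axis=1)

# Toy usage: train on "object" pixels, then score candidate pixels.
rng = np.random.default_rng(0)
train = rng.normal(size=(200, 5))        # hypothetical object pixel features
mean, basis = fit_eigenspace(train, k=2)
scores = similarity(train, mean, basis)
# A crude segmentation: keep pixels whose similarity exceeds a threshold.
mask = scores > np.median(scores)
```

Because every pixel is scored independently, an occluded or noisy pixel only degrades its own similarity value rather than corrupting a whole vectorized window, which is the intuition behind the robustness claim above.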
Cite
Text
Arif and Vela. "Non-Rigid Object Localization and Segmentation Using Eigenspace Representation." IEEE/CVF International Conference on Computer Vision, 2009. doi:10.1109/ICCV.2009.5459244
Markdown
[Arif and Vela. "Non-Rigid Object Localization and Segmentation Using Eigenspace Representation." IEEE/CVF International Conference on Computer Vision, 2009.](https://mlanthology.org/iccv/2009/arif2009iccv-non/) doi:10.1109/ICCV.2009.5459244
BibTeX
@inproceedings{arif2009iccv-non,
title = {{Non-Rigid Object Localization and Segmentation Using Eigenspace Representation}},
author = {Arif, Omar and Vela, Patricio A.},
booktitle = {IEEE/CVF International Conference on Computer Vision},
year = {2009},
pages = {803-808},
doi = {10.1109/ICCV.2009.5459244},
url = {https://mlanthology.org/iccv/2009/arif2009iccv-non/}
}