A Graphical Model Approach for Matching Partial Signatures
Abstract
In this paper, we present a novel partial signature matching method using graphical models. Shape context features are extracted from the contour of signatures to capture local variations, and K-means clustering is used to build a visual vocabulary from a set of reference signatures. To describe the signatures, supervised latent Dirichlet allocation is used to learn the latent distributions of the salient regions over the visual vocabulary, and hierarchical Dirichlet processes are used to infer the number of salient regions needed. Our work is evaluated on three datasets derived from the DS-I Tobacco signature dataset, which contains clean signatures, and the DS-II UMD dataset, whose signatures exhibit various degradations. The results show the effectiveness of the approach for both partial and full signature matching.
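The vocabulary-building step described in the abstract, clustering local shape descriptors into visual words and quantizing a signature against them, can be sketched with a plain K-means quantizer. This is an illustrative sketch only: the function names, the 60-dimensional descriptor size (a common shape-context binning of 5 radial × 12 angular bins), and the bag-of-words histogram are assumptions, not details taken from the paper.

```python
import numpy as np

def build_vocabulary(descriptors, k, n_iter=20, seed=0):
    """Cluster local descriptors (n x d) into k visual words with plain k-means.
    Illustrative stand-in for the paper's vocabulary-building step."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), size=k, replace=False)]
    for _ in range(n_iter):
        # assign each descriptor to its nearest center (Euclidean distance)
        dists = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its assigned descriptors
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def bow_histogram(descriptors, centers):
    """Quantize a signature's descriptors against the vocabulary and return
    a normalized bag-of-visual-words histogram."""
    dists = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(centers)).astype(float)
    return hist / hist.sum()
```

In this sketch, each signature becomes a fixed-length histogram over the vocabulary, which is the kind of discrete "word" representation that the latent Dirichlet allocation stage would then model.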
Cite
Text
Du et al. "A Graphical Model Approach for Matching Partial Signatures." Conference on Computer Vision and Pattern Recognition, 2015. doi:10.1109/CVPR.2015.7298753
Markdown
[Du et al. "A Graphical Model Approach for Matching Partial Signatures." Conference on Computer Vision and Pattern Recognition, 2015.](https://mlanthology.org/cvpr/2015/du2015cvpr-graphical/) doi:10.1109/CVPR.2015.7298753
BibTeX
@inproceedings{du2015cvpr-graphical,
title = {{A Graphical Model Approach for Matching Partial Signatures}},
author = {Du, Xianzhi and Doermann, David and Abd-Almageed, Wael},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2015},
doi = {10.1109/CVPR.2015.7298753},
url = {https://mlanthology.org/cvpr/2015/du2015cvpr-graphical/}
}