Joint Sparsity-Based Robust Multimodal Biometrics Recognition
Abstract
Traditional biometric recognition systems rely on a single biometric signature for authentication. While the advantage of using multiple sources of information for establishing identity has been widely recognized, computational models for multimodal biometrics recognition have only recently received attention. We propose a novel multimodal multivariate sparse representation method for multimodal biometrics recognition, which represents the test data by a sparse linear combination of training data while constraining the observations from different modalities of the test subject to share their sparse representations. Thus, we simultaneously take into account correlations as well as coupling information between biometric modalities. Furthermore, the model is modified to make it robust to noise and occlusion. The resulting optimization problem is solved using an efficient alternating direction method. Experiments on a challenging public dataset show that our method compares favorably with competing fusion-based methods.
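The core idea above, representing each modality's test observation as a sparse combination of that modality's training samples while forcing the coefficient matrix to be row-sparse (so all modalities select the same training subjects), can be sketched with a simple proximal-gradient solver for an ℓ1,2-regularized least-squares objective. This is a minimal illustrative sketch, not the paper's alternating direction algorithm; all function names, parameters, and the solver choice are assumptions made for illustration.

```python
import numpy as np

def joint_sparse_code(dicts, obs, lam=0.05, n_iter=300):
    """Illustrative joint sparse coding across modalities.

    Minimizes  0.5 * sum_m ||y_m - D_m x_m||^2 + lam * ||X||_{1,2},
    where column m of X is the coefficient vector for modality m and
    ||X||_{1,2} sums the l2 norms of the rows of X, encouraging all
    modalities to share the same (row) support over training samples.

    dicts: list of (d_m, n) dictionaries, columns = training samples
    obs:   list of (d_m,) test observations, one per modality
    Returns X of shape (n, M).
    """
    M = len(dicts)
    n = dicts[0].shape[1]
    X = np.zeros((n, M))
    # Step size from the largest per-modality Lipschitz constant.
    L = max(np.linalg.norm(D, 2) ** 2 for D in dicts)
    t = 1.0 / L
    for _ in range(n_iter):
        # Gradient step of the data-fit term, one column per modality.
        G = np.column_stack([D.T @ (D @ X[:, m] - y)
                             for m, (D, y) in enumerate(zip(dicts, obs))])
        Z = X - t * G
        # Row-wise group soft-thresholding: the prox of the l1,2 norm.
        norms = np.linalg.norm(Z, axis=1, keepdims=True)
        shrink = np.maximum(1.0 - t * lam / np.maximum(norms, 1e-12), 0.0)
        X = shrink * Z
    return X
```

In a recognition pipeline, the test subject would then be assigned to the class whose training columns yield the smallest joint reconstruction error, mirroring the classification rule common to sparse-representation-based methods.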
Cite
Text
Shekhar et al. "Joint Sparsity-Based Robust Multimodal Biometrics Recognition." European Conference on Computer Vision Workshops, 2012. doi:10.1007/978-3-642-33885-4_37
Markdown
[Shekhar et al. "Joint Sparsity-Based Robust Multimodal Biometrics Recognition." European Conference on Computer Vision Workshops, 2012.](https://mlanthology.org/eccvw/2012/shekhar2012eccvw-joint/) doi:10.1007/978-3-642-33885-4_37
BibTeX
@inproceedings{shekhar2012eccvw-joint,
title = {{Joint Sparsity-Based Robust Multimodal Biometrics Recognition}},
author = {Shekhar, Sumit and Patel, Vishal M. and Nasrabadi, Nasser M. and Chellappa, Rama},
booktitle = {European Conference on Computer Vision Workshops},
year = {2012},
  pages = {365--374},
doi = {10.1007/978-3-642-33885-4_37},
url = {https://mlanthology.org/eccvw/2012/shekhar2012eccvw-joint/}
}