ResSaNet: A Hybrid Backbone of Residual Block and Self-Attention Module for Masked Face Recognition

Abstract

In recent years, the performance of face recognition has improved significantly through the use of convolutional neural networks (CNNs) as feature extractors. At the same time, to avoid spreading the COVID-19 virus, people often wear masks even when passing through face recognition systems. It is therefore necessary to improve the performance of masked face recognition so that users can rely on face recognition methods more conveniently. In this paper, we propose a feature extraction backbone named ResSaNet that integrates CNN components (specifically, residual blocks) and self-attention modules into a single network. By capturing local and global information of the face region simultaneously, the proposed ResSaNet achieves promising results on both masked and non-masked test data.
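The abstract's central idea, combining a local path (residual convolution) with a global path (self-attention) in one block, can be sketched as follows. This is a minimal, hypothetical NumPy illustration, not the paper's actual ResSaNet architecture: the 1-D depthwise convolution, the single-head attention, the additive fusion, and all shapes are assumptions chosen only to show how the two branches complement each other.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    # Global branch: every token attends to every other token,
    # so each output mixes information from the whole face region.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v

def residual_conv(x, w):
    # Local branch: 3-tap depthwise convolution per channel,
    # wrapped with an identity skip connection (residual block).
    pad = np.pad(x, ((1, 1), (0, 0)))
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        out[t] = (pad[t:t + 3] * w).sum(axis=0)
    return x + out

def hybrid_block(x, w_conv, Wq, Wk, Wv):
    # Hypothetical fusion: sum the local and global paths so the
    # block captures fine texture and long-range structure at once.
    return residual_conv(x, w_conv) + self_attention(x, Wq, Wk, Wv)

rng = np.random.default_rng(0)
tokens, dim = 6, 8                      # e.g. 6 spatial tokens, 8 channels
x = rng.standard_normal((tokens, dim))
w_conv = rng.standard_normal((3, dim)) * 0.1
Wq, Wk, Wv = (rng.standard_normal((dim, dim)) * 0.1 for _ in range(3))
y = hybrid_block(x, w_conv, Wq, Wk, Wv)
print(y.shape)  # (6, 8) — same shape in and out, so blocks can be stacked
```

Because the block preserves the token/channel shape, such hybrid blocks can be stacked into a deep backbone, which is the general pattern the paper's title describes.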

Cite

Text

Chang et al. "ResSaNet: A Hybrid Backbone of Residual Block and Self-Attention Module for Masked Face Recognition." IEEE/CVF International Conference on Computer Vision Workshops, 2021. doi:10.1109/ICCVW54120.2021.00170

Markdown

[Chang et al. "ResSaNet: A Hybrid Backbone of Residual Block and Self-Attention Module for Masked Face Recognition." IEEE/CVF International Conference on Computer Vision Workshops, 2021.](https://mlanthology.org/iccvw/2021/chang2021iccvw-ressanet/) doi:10.1109/ICCVW54120.2021.00170

BibTeX

@inproceedings{chang2021iccvw-ressanet,
  title     = {{ResSaNet: A Hybrid Backbone of Residual Block and Self-Attention Module for Masked Face Recognition}},
  author    = {Chang, Wei-Yi and Tsai, Ming-Ying and Lo, Shih-Chieh},
  booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
  year      = {2021},
  pages     = {1468--1476},
  doi       = {10.1109/ICCVW54120.2021.00170},
  url       = {https://mlanthology.org/iccvw/2021/chang2021iccvw-ressanet/}
}