Hide and Seek: Uncovering Facial Occlusion with Variable-Threshold Robust PCA
Abstract
Face images play an important role in human social activities, but their usefulness can be severely hampered when they are corrupted by occluders such as eyeglasses, face marks, and scarves. Existing methods for removing occlusions from face images fall into three broad categories: PCA, robust PCA (RPCA), and sparse coding. Their major weaknesses are inconsistent performance across test conditions and possible corruption of the unoccluded parts of the recovered target image. This paper presents the variable-threshold RPCA (VRPCA) method, which extends RPCA with variable thresholding. Comprehensive tests show that VRPCA preserves the unoccluded parts of the target image with practically zero error. Compared to existing methods, it is more accurate, reliable, and consistent across various test conditions.
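For context, the RPCA decomposition that VRPCA builds on separates a data matrix into a low-rank part (the clean face subspace) and a sparse part (the occlusion). Below is a minimal sketch of plain RPCA via principal component pursuit solved with ADMM, not the paper's variable-threshold variant; the function names and parameter defaults here are illustrative choices, not from the paper.

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    # Elementwise soft thresholding: proximal operator of the l1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Decompose M into low-rank L plus sparse S (principal component pursuit).

    Solves  min ||L||_* + lam * ||S||_1  subject to  L + S = M  via ADMM.
    """
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))   # standard PCP choice
    if mu is None:
        mu = 0.25 * m * n / (np.abs(M).sum() + 1e-12)
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                 # dual variable
    for _ in range(max_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
        if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
            break
    return L, S
```

VRPCA replaces the fixed shrinkage threshold with a variable one so that unoccluded pixels are left untouched; the sketch above uses the conventional fixed thresholds only to show the baseline decomposition.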
Cite
Text

Leow et al. "Hide and Seek: Uncovering Facial Occlusion with Variable-Threshold Robust PCA." IEEE/CVF Winter Conference on Applications of Computer Vision, 2016. doi:10.1109/WACV.2016.7477579

Markdown

[Leow et al. "Hide and Seek: Uncovering Facial Occlusion with Variable-Threshold Robust PCA." IEEE/CVF Winter Conference on Applications of Computer Vision, 2016.](https://mlanthology.org/wacv/2016/leow2016wacv-hide/) doi:10.1109/WACV.2016.7477579

BibTeX
@inproceedings{leow2016wacv-hide,
title = {{Hide and Seek: Uncovering Facial Occlusion with Variable-Threshold Robust PCA}},
author = {Leow, Wee Kheng and Li, Guodong and Lai, Jian and Sim, Terence and Sharma, Vaishali},
booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
year = {2016},
  pages = {1--8},
doi = {10.1109/WACV.2016.7477579},
url = {https://mlanthology.org/wacv/2016/leow2016wacv-hide/}
}