Detecting Real-Time Deep-Fake Videos Using Active Illumination
Abstract
While many have grown suspicious of viral images and videos found online, there is a general sense that we can and should trust that the person on the other end of a videoconferencing call is who they purport to be. The real-time creation of sophisticated deep fakes, however, is making it more difficult to trust even live video calls. Detecting deep fakes in real time introduces new challenges compared to offline forensic analyses. We describe a technique for detecting, in real time, deep-fake videos transmitted over a live videoconferencing application. This technique leverages the fact that a video call typically places a user in front of a light source (the computer display) that can be manipulated to induce a controlled change in the appearance of the user’s face. Deviations from the expected change in appearance over time can be measured in real time and used to verify the authenticity of a video-call participant.
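The core idea — emit a known modulation from the display and check whether the observed face brightness tracks it — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the probe signal, the frame-averaged brightness measurement, and the correlation-based score are all simplifying assumptions introduced here.

```python
import numpy as np

def illumination_probe(num_frames: int, period: int = 30) -> np.ndarray:
    """Hypothetical probe: sinusoidal display-brightness modulation in [0, 1]."""
    t = np.arange(num_frames)
    return 0.5 + 0.5 * np.sin(2 * np.pi * t / period)

def face_brightness(frames: np.ndarray) -> np.ndarray:
    """Mean intensity per frame; assumes frames are already cropped to the face."""
    return frames.reshape(frames.shape[0], -1).mean(axis=1)

def authenticity_score(frames: np.ndarray, probe: np.ndarray) -> float:
    """Pearson correlation between the emitted probe and the observed face
    brightness. A genuine face physically reflects the display's light, so it
    should track the probe; a synthesized face need not."""
    observed = face_brightness(frames)
    p = (probe - probe.mean()) / probe.std()   # zero mean, unit variance
    o = (observed - observed.mean()) / observed.std()
    return float(np.mean(p * o))
```

In practice a threshold on this score would flag calls whose face region fails to respond to the controlled illumination; the real system must additionally handle latency between the emitted and observed signals, ambient lighting, and compression noise.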
Cite
Text
Gerstner and Farid. "Detecting Real-Time Deep-Fake Videos Using Active Illumination." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022. doi:10.1109/CVPRW56347.2022.00015
Markdown
[Gerstner and Farid. "Detecting Real-Time Deep-Fake Videos Using Active Illumination." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022.](https://mlanthology.org/cvprw/2022/gerstner2022cvprw-detecting/) doi:10.1109/CVPRW56347.2022.00015
BibTeX
@inproceedings{gerstner2022cvprw-detecting,
title = {{Detecting Real-Time Deep-Fake Videos Using Active Illumination}},
author = {Gerstner, Candice R. and Farid, Hany},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2022},
pages = {53--60},
doi = {10.1109/CVPRW56347.2022.00015},
url = {https://mlanthology.org/cvprw/2022/gerstner2022cvprw-detecting/}
}