Learning to Anonymize Faces for Privacy Preserving Action Detection
Abstract
There is an increasing concern that computer vision devices are invading the privacy of their users. We want camera systems and robots to recognize important events and assist in daily human life by understanding the videos they capture, but we also want to ensure that they do not intrude on people's privacy. In this paper, we propose a new principled approach for learning a video anonymizer. We use an adversarial training setting in which two systems compete: (1) a video anonymizer that modifies the original video to remove privacy-sensitive information (i.e., human faces) while still trying to maximize spatial action detection performance, and (2) a discriminator that tries to extract privacy-sensitive information from such anonymized videos. The end goal is for the video anonymizer to perform a pixel-level modification of video frames to anonymize each person's face, while minimizing the effect on action detection performance. We experimentally confirm the benefit of our approach, particularly compared to conventional hand-crafted video/face anonymization methods including masking, blurring, and noise addition.
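The adversarial setting described above can be sketched as a minimax objective. Note that the notation below is illustrative and ours, not necessarily the paper's: $f_A$ denotes the anonymizer, $f_T$ the action detector, $f_D$ the privacy discriminator (face identifier), $v$ an input video, and $L_T$, $L_D$ the task and privacy losses, respectively:

```latex
\min_{f_A,\, f_T} \; \max_{f_D} \quad
    L_T\big(f_T(f_A(v))\big) \;-\; \lambda \, L_D\big(f_D(f_A(v))\big)
```

Intuitively, the anonymizer $f_A$ is trained to keep the action detection loss $L_T$ low while driving the discriminator's face-identification loss $L_D$ high, with $\lambda$ trading off the two objectives; the discriminator $f_D$ is simultaneously trained to keep $L_D$ low on the anonymized frames.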
Cite
Ren et al. "Learning to Anonymize Faces for Privacy Preserving Action Detection." Proceedings of the European Conference on Computer Vision (ECCV), 2018. doi:10.1007/978-3-030-01246-5_38
@inproceedings{ren2018eccv-learning,
title = {{Learning to Anonymize Faces for Privacy Preserving Action Detection}},
author = {Ren, Zhongzheng and Jae Lee, Yong and Ryoo, Michael S.},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2018},
doi = {10.1007/978-3-030-01246-5_38},
url = {https://mlanthology.org/eccv/2018/ren2018eccv-learning/}
}