Protecting Visual Secrets Using Adversarial Nets
Abstract
Protecting visual secrets is an important problem due to the prevalence of cameras that continuously monitor our surroundings. Any viable solution to this problem should also minimize the impact on the utility of applications that use images. In this work, we build on existing work in adversarial learning to design a perturbation mechanism that jointly optimizes privacy and utility objectives. We provide a feasibility study of the proposed mechanism and present ideas on developing a privacy framework based on the adversarial perturbation mechanism.
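The abstract's "jointly optimizes privacy and utility objectives" refers to an adversarial game: an obfuscator perturbs the data to preserve utility while *maximizing* the loss of an adversary that tries to recover the secret. The sketch below is a deliberately tiny stand-in, not the paper's actual architecture (the paper uses deep networks on images): here the obfuscator is a per-channel mask, the adversary is a logistic regression, and channel 1 of the toy data leaks a binary secret. All names, the linear/logistic setup, and the hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: channel 0 carries the useful signal, channel 1 leaks a binary secret.
n = 512
u = rng.normal(size=n)                     # useful signal
s = rng.integers(0, 2, size=n)             # secret bit
x = np.stack([u, (2 * s - 1) + 0.3 * rng.normal(size=n)], axis=1)
y = 2.0 * s - 1.0                          # secret labels in {-1, +1}

m = np.ones(2)        # obfuscator: per-channel mask (the "perturbation")
w = np.zeros(2)       # adversary: logistic-regression weights
lam, eta = 0.5, 0.1   # privacy weight and step size (illustrative values)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(300):
    xp = x * m                             # obfuscated data
    p = sigmoid(-y * (xp @ w))             # per-example adversary error signal

    # Adversary step: minimize its own logistic loss on the obfuscated data.
    grad_w = (-(y * p)[:, None] * xp).mean(axis=0)
    w -= eta * grad_w

    # Obfuscator step: keep channel 0 intact (utility) while maximizing the
    # adversary's loss (privacy), i.e. descend  utility_loss - lam * adv_loss.
    grad_util = np.array([2.0 * ((m[0] - 1.0) * x[:, 0] ** 2).mean(), 0.0])
    grad_adv = (-(y * p)[:, None] * (w * x)).mean(axis=0)
    m -= eta * (grad_util - lam * grad_adv)

# Expect the secret-leaking channel to be suppressed relative to the useful one.
print("mask:", m)
```

The alternating updates mirror the usual GAN-style training loop: each player takes a gradient step on its own objective, with the obfuscator's objective trading utility against privacy via the weight `lam`.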
Cite
Text
Raval et al. "Protecting Visual Secrets Using Adversarial Nets." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2017. doi:10.1109/CVPRW.2017.174
Markdown
[Raval et al. "Protecting Visual Secrets Using Adversarial Nets." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2017.](https://mlanthology.org/cvprw/2017/raval2017cvprw-protecting/) doi:10.1109/CVPRW.2017.174
BibTeX
@inproceedings{raval2017cvprw-protecting,
title = {{Protecting Visual Secrets Using Adversarial Nets}},
author = {Raval, Nisarg and Machanavajjhala, Ashwin and Cox, Landon P.},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2017},
pages = {1329--1332},
doi = {10.1109/CVPRW.2017.174},
url = {https://mlanthology.org/cvprw/2017/raval2017cvprw-protecting/}
}