Certified Neural Network Watermarks with Randomized Smoothing
Abstract
Watermarking is a commonly used strategy to protect creators’ rights to digital images, videos, and audio. Recently, watermarking methods have been extended to deep learning models: in principle, the watermark should be preserved when an adversary tries to copy the model. In practice, however, watermarks can often be removed by an intelligent adversary. Several papers have proposed watermarking methods that claim to be empirically resistant to different types of removal attacks, but these new techniques often fail in the face of new or better-tuned adversaries. In this paper, we propose the first certifiable watermarking method. Using the randomized smoothing technique, we show that our watermark is guaranteed to be unremovable unless the model parameters are changed by more than a certain $\ell_2$ threshold. In addition to being certifiable, our watermark is also empirically more robust than previous watermarking methods.
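To make the idea concrete, below is a minimal sketch of what randomized smoothing over model parameters could look like for certifying a watermark: Gaussian noise is added to the weights, the watermark (trigger-set) accuracy of the noisy models is averaged, and a Cohen-style $\ell_2$ radius is derived from that estimate. This is an illustrative assumption of the general technique, not the authors' implementation; the names `model`, `trigger_loader`, `trigger_accuracy`, `certified_radius`, `sigma`, and `n_samples` are all hypothetical.

```python
# Illustrative sketch (not the authors' code) of parameter-space randomized
# smoothing for a watermark certificate.
import copy
import torch
from scipy.stats import norm


def trigger_accuracy(model, trigger_loader, device="cpu"):
    """Fraction of watermark trigger examples the model labels as intended."""
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for x, y in trigger_loader:
            x, y = x.to(device), y.to(device)
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / max(total, 1)


def certified_radius(model, trigger_loader, sigma=1.0, n_samples=100, device="cpu"):
    """Monte-Carlo estimate of the smoothed trigger accuracy p_hat, plus the
    Cohen-style l2 radius sigma * Phi^{-1}(p_hat) within which weight
    perturbations cannot push the smoothed watermark accuracy below 1/2."""
    accs = []
    for _ in range(n_samples):
        noisy = copy.deepcopy(model)
        with torch.no_grad():
            for p in noisy.parameters():
                # Gaussian smoothing noise applied directly to the parameters.
                p.add_(torch.randn_like(p) * sigma)
        accs.append(trigger_accuracy(noisy, trigger_loader, device))
    p_hat = sum(accs) / len(accs)
    # A rigorous certificate would use a lower confidence bound on p_hat
    # rather than the point estimate used here for brevity.
    radius = sigma * norm.ppf(min(max(p_hat, 1e-6), 1 - 1e-6))
    return p_hat, radius
```

In this sketch, a larger noise level `sigma` yields a larger certified radius but degrades the smoothed model's accuracy, mirroring the usual robustness-accuracy trade-off in randomized smoothing.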
Cite

Text
Bansal et al. "Certified Neural Network Watermarks with Randomized Smoothing." International Conference on Machine Learning, 2022.

Markdown
[Bansal et al. "Certified Neural Network Watermarks with Randomized Smoothing." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/bansal2022icml-certified/)

BibTeX
@inproceedings{bansal2022icml-certified,
  title     = {{Certified Neural Network Watermarks with Randomized Smoothing}},
  author    = {Bansal, Arpit and Chiang, Ping-Yeh and Curry, Michael J and Jain, Rajiv and Wigington, Curtis and Manjunatha, Varun and Dickerson, John P and Goldstein, Tom},
  booktitle = {International Conference on Machine Learning},
  year      = {2022},
  pages     = {1450--1465},
  volume    = {162},
  url       = {https://mlanthology.org/icml/2022/bansal2022icml-certified/}
}