Evaluating Durability: Benchmark Insights into Image and Text Watermarking

Abstract

As large models become increasingly prevalent, watermarking has emerged as a crucial technology for copyright protection, authenticity verification, and content tracking. The rise of multimodal applications further amplifies the importance of effective watermarking techniques. While watermark robustness is critical for real-world deployment, the current understanding of watermark robustness against various forms of corruption remains limited. Our study evaluates watermark robustness in both image and text domains, testing against an extensive set of 100 image perturbations and 63 text perturbations. The results reveal significant vulnerabilities in contemporary watermarking approaches: detection accuracy deteriorates by more than 50% under common perturbations, highlighting a critical gap between current capabilities and practical requirements. These findings emphasize the urgent need for more robust watermarking methods that can withstand real-world disturbances. Our project website can be found at https://mmwatermark-robustness.github.io/.
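To give a concrete sense of the kind of image corruptions such a benchmark applies before re-running watermark detection, below is a minimal sketch of two representative perturbations (additive Gaussian noise and a box blur), assuming images are grayscale float arrays in [0, 1]. The function names and parameters are illustrative and are not taken from the paper's codebase.

```python
import numpy as np


def gaussian_noise(img: np.ndarray, sigma: float = 0.1, seed: int = 0) -> np.ndarray:
    """Additive Gaussian noise: a common corruption used in robustness benchmarks.

    Illustrative only; the actual benchmark's perturbation set and parameters
    may differ.
    """
    rng = np.random.default_rng(seed)
    noisy = img + rng.normal(0.0, sigma, img.shape)
    # Keep pixel values in the valid [0, 1] range after adding noise.
    return np.clip(noisy, 0.0, 1.0)


def box_blur(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Simple k-by-k box blur via a sliding-window mean (illustrative only)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out


# A watermark robustness test would decode the watermark from perturbed
# copies and compare detection accuracy against the clean image.
img = np.full((8, 8), 0.5)
perturbed = gaussian_noise(img)
blurred = box_blur(img)
```

A full benchmark would sweep many such perturbations (compression, geometric transforms, color shifts, and so on) at multiple severities, measuring watermark detection accuracy after each.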

Cite

Text

Qiu et al. "Evaluating Durability: Benchmark Insights into Image and Text Watermarking." Data-centric Machine Learning Research, 2024.

Markdown

[Qiu et al. "Evaluating Durability: Benchmark Insights into Image and Text Watermarking." Data-centric Machine Learning Research, 2024.](https://mlanthology.org/dmlr/2024/qiu2024dmlr-evaluating/)

BibTeX

@article{qiu2024dmlr-evaluating,
  title     = {{Evaluating Durability: Benchmark Insights into Image and Text Watermarking}},
  author    = {Qiu, Jielin and Han, William and Zhao, Xuandong and Long, Shangbang and Faloutsos, Christos and Li, Lei},
  journal   = {Data-centric Machine Learning Research},
  year      = {2024},
  pages     = {1--44},
  volume    = {2},
  url       = {https://mlanthology.org/dmlr/2024/qiu2024dmlr-evaluating/}
}