Smoothing-Based Adversarial Defense Methods for Inverse Problems

Abstract

In this paper, we propose randomized smoothing methods that aim to enhance the robustness of linear inverse problem solvers against adversarial attacks, in particular guaranteeing an upper bound on a suitably-defined notion of sensitivity to perturbations. In addition, we propose two novel algorithms that incorporate randomized smoothing into training: one injects random perturbations directly into the input data, and the other adds random perturbations to the gradients during backpropagation. We conduct numerical evaluations on two of the most prominent inverse problems --- denoising and compressed sensing --- using a variety of neural network estimators and datasets. Across a broad range of scenarios, the results demonstrate the strong potential of randomized smoothing for enhancing the robustness of linear inverse problem solvers.
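The core idea of randomized smoothing can be sketched as follows: given an estimator f, its smoothed version averages f over Gaussian perturbations of the input. This is a minimal Monte Carlo illustration under assumed notation (sigma for the noise level, f a stand-in linear "denoiser"), not the authors' implementation or training algorithms:

```python
import numpy as np

def smoothed_estimator(f, y, sigma=0.1, n_samples=100, seed=None):
    """Monte Carlo approximation of the randomized-smoothed estimator
    f_sigma(y) = E_{delta ~ N(0, sigma^2 I)}[ f(y + delta) ]."""
    rng = np.random.default_rng(seed)
    outputs = [f(y + sigma * rng.standard_normal(y.shape))
               for _ in range(n_samples)]
    return np.mean(outputs, axis=0)

# Toy linear "denoiser" as a placeholder for a trained network (hypothetical).
W = 0.9 * np.eye(4)
f = lambda y: W @ y

y = np.ones(4)
x_hat = smoothed_estimator(f, y, sigma=0.05, n_samples=500, seed=0)
```

Because averaging over input noise damps the effect of any single worst-case perturbation, the smoothed estimator's output varies less under small adversarial changes to y, which is the sensitivity-bounding intuition behind the approach.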

Cite

Text

Sun and Scarlett. "Smoothing-Based Adversarial Defense Methods for Inverse Problems." NeurIPS 2024 Workshops: AdvML-Frontiers, 2024.

Markdown

[Sun and Scarlett. "Smoothing-Based Adversarial Defense Methods for Inverse Problems." NeurIPS 2024 Workshops: AdvML-Frontiers, 2024.](https://mlanthology.org/neuripsw/2024/sun2024neuripsw-smoothingbased/)

BibTeX

@inproceedings{sun2024neuripsw-smoothingbased,
  title     = {{Smoothing-Based Adversarial Defense Methods for Inverse Problems}},
  author    = {Sun, Yang and Scarlett, Jonathan},
  booktitle = {NeurIPS 2024 Workshops: AdvML-Frontiers},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/sun2024neuripsw-smoothingbased/}
}