Deep Saliency Prior for Reducing Visual Distraction

Abstract

Using only a model that was trained to predict where people look in images, and no additional training data, we can produce a range of powerful editing effects for reducing distraction in images. Given an image and a mask specifying the region to edit, we backpropagate through a state-of-the-art saliency model to parameterize a differentiable editing operator, such that the saliency within the masked region is reduced. We demonstrate several operators, including: a recoloring operator, which learns to apply a color transform that camouflages and blends distractors into their surroundings; a warping operator, which warps less salient image regions to cover distractors, gradually collapsing objects into themselves and effectively removing them (an effect akin to inpainting); and a GAN operator, which uses a semantic prior to fully replace image regions with plausible, less salient alternatives. The resulting effects are consistent with cognitive research on the human visual system (e.g., since color mismatch is salient, the recoloring operator learns to harmonize objects' colors with their surroundings to reduce their saliency), and, importantly, are all achieved solely through the guidance of the pretrained saliency model. We present results on a variety of natural images and conduct a perceptual study to evaluate and validate the changes in viewers' eye-gaze between the original images and our edited results.
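
The core idea described above is a simple optimization loop: freeze a pretrained saliency predictor, parameterize a differentiable edit, and minimize the predicted saliency inside the mask by backpropagating through the frozen model. The PyTorch sketch below illustrates that loop under stated assumptions: TinySaliencyNet is a toy stand-in for the paper's pretrained saliency model, and the per-channel affine recolor is a simplified, hypothetical version of the recoloring operator, not the authors' implementation.

  # Minimal sketch: optimize a differentiable recoloring edit so that a frozen
  # saliency predictor assigns lower saliency to the masked region.
  import torch
  import torch.nn as nn

  class TinySaliencyNet(nn.Module):  # toy stand-in for the pretrained saliency model
      def __init__(self):
          super().__init__()
          self.net = nn.Sequential(
              nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
              nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
          )

      def forward(self, x):  # x: (B, 3, H, W) -> per-pixel saliency (B, 1, H, W)
          return self.net(x)

  saliency_model = TinySaliencyNet().eval()
  for p in saliency_model.parameters():  # the saliency prior stays frozen
      p.requires_grad_(False)

  image = torch.rand(1, 3, 64, 64)   # toy input image
  mask = torch.zeros(1, 1, 64, 64)   # region whose saliency should be reduced
  mask[..., 16:48, 16:48] = 1.0

  # Hypothetical recoloring operator: per-channel scale and shift inside the mask.
  scale = torch.ones(1, 3, 1, 1, requires_grad=True)
  shift = torch.zeros(1, 3, 1, 1, requires_grad=True)
  optimizer = torch.optim.Adam([scale, shift], lr=1e-2)

  for step in range(200):
      recolored = (image * scale + shift).clamp(0, 1)
      edited = image * (1 - mask) + recolored * mask
      saliency = saliency_model(edited)
      loss = (saliency * mask).sum() / mask.sum()  # mean saliency inside the mask
      optimizer.zero_grad()
      loss.backward()  # gradients flow through the frozen saliency model
      optimizer.step()

In the paper, the same objective drives other operators (warping, GAN-based replacement) by swapping in a different differentiable edit while keeping the frozen saliency model as the only source of supervision.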

Cite

Text

Aberman et al. "Deep Saliency Prior for Reducing Visual Distraction." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.01923

Markdown

[Aberman et al. "Deep Saliency Prior for Reducing Visual Distraction." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/aberman2022cvpr-deep/) doi:10.1109/CVPR52688.2022.01923

BibTeX

@inproceedings{aberman2022cvpr-deep,
  title     = {{Deep Saliency Prior for Reducing Visual Distraction}},
  author    = {Aberman, Kfir and He, Junfeng and Gandelsman, Yossi and Mosseri, Inbar and Jacobs, David E. and Kohlhoff, Kai and Pritch, Yael and Rubinstein, Michael},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {19851--19860},
  doi       = {10.1109/CVPR52688.2022.01923},
  url       = {https://mlanthology.org/cvpr/2022/aberman2022cvpr-deep/}
}