Unadversarial Examples: Designing Objects for Robust Vision
Abstract
We study a class of computer vision settings wherein one can modify the design of the objects being recognized. We develop a framework that leverages this capability (and deep networks' unusual sensitivity to input perturbations) to design "robust objects," i.e., objects that are explicitly optimized to be confidently classified. Our framework yields improved performance on standard benchmarks, a simulated robotics environment, and physical-world experiments.
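The core idea is the mirror image of an adversarial attack: instead of perturbing an input to maximize a classifier's loss, a texture or patch placed on the object is optimized to minimize the loss on the object's true (target) class. The sketch below illustrates this under stated assumptions; it is not the authors' implementation, and the model (ResNet-18), patch size, target class, and stand-in background images are all illustrative choices.

```python
# Minimal sketch (not the authors' code): optimize an "unadversarial" patch so
# that a pretrained classifier confidently predicts a chosen target class.
# Model, patch size, target class, and data below are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet18(pretrained=True).to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the patch is optimized, not the network

target_class = 207  # assumed target label (an arbitrary ImageNet class index)
patch = torch.zeros(1, 3, 64, 64, device=device, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.05)

def apply_patch(images, patch):
    """Paste the (sigmoid-squashed) patch onto the top-left corner of each image."""
    patched = images.clone()
    patched[:, :, :64, :64] = torch.sigmoid(patch)  # keep patch pixels in [0, 1]
    return patched

for step in range(200):
    # Stand-in batch; in practice these would be renderings or photos of the
    # object under varying poses, backgrounds, and lighting.
    images = torch.rand(8, 3, 224, 224, device=device)
    logits = model(apply_patch(images, patch))
    targets = torch.full((8,), target_class, dtype=torch.long, device=device)
    # Descend (rather than ascend, as in adversarial attacks) the target-class loss.
    loss = F.cross_entropy(logits, targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The only substantive change from standard adversarial-example generation is the sign of the objective: the patch is trained to make the target class easier, not harder, to recognize, so the classifier's sensitivity to input perturbations works in the designer's favor.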
Cite
Text
Salman et al. "Unadversarial Examples: Designing Objects for Robust Vision." Neural Information Processing Systems, 2021.
Markdown
[Salman et al. "Unadversarial Examples: Designing Objects for Robust Vision." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/salman2021neurips-unadversarial/)
BibTeX
@inproceedings{salman2021neurips-unadversarial,
title = {{Unadversarial Examples: Designing Objects for Robust Vision}},
author = {Salman, Hadi and Ilyas, Andrew and Engstrom, Logan and Vemprala, Sai and Madry, Aleksander and Kapoor, Ashish},
booktitle = {Neural Information Processing Systems},
year = {2021},
url = {https://mlanthology.org/neurips/2021/salman2021neurips-unadversarial/}
}