Disrupting Model Training with Adversarial Shortcuts
Abstract
When data is publicly released for human consumption, it is unclear how to prevent its unauthorized usage for machine learning purposes. Successful model training may be preventable with carefully designed dataset modifications, and we present a proof-of-concept approach for the image classification setting. We propose methods based on the notion of adversarial shortcuts, which encourage models to rely on non-robust signals rather than semantic features, and our experiments demonstrate that these measures successfully prevent deep learning models from achieving high accuracy on real, unmodified data examples.
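To make the idea of an adversarial shortcut concrete, the following is a minimal sketch of one possible instantiation: overlaying a fixed, class-dependent pattern on every training image so that a model can fit the labels from the pattern alone rather than from semantic content. The function name, corner placement, and intensity value are illustrative assumptions for exposition, not the paper's actual construction.

import numpy as np

def add_class_shortcut(images, labels, num_classes, intensity=0.03, patch=4, seed=0):
    """Overlay a fixed, class-dependent noise patch on each image.

    A model trained on the modified data can predict the label from the
    patch alone, so it never needs to learn the semantic features.

    images: float array of shape (N, H, W, C), values in [0, 1]
    labels: int array of shape (N,)
    """
    rng = np.random.default_rng(seed)
    H, W, C = images.shape[1:]
    # One fixed random pattern per class; every image of that class gets
    # the same pattern, making it a trivially learnable (non-robust) cue.
    patterns = rng.uniform(-1.0, 1.0, size=(num_classes, patch, patch, C))
    shortcut = np.zeros_like(images)
    shortcut[:, :patch, :patch, :] = patterns[labels]
    return np.clip(images + intensity * shortcut, 0.0, 1.0)

Under this assumed setup, a classifier trained on the modified images can reach high training accuracy by reading the corner patch, yet it transfers poorly to clean, unmodified test images, which is the intended disruption.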
Cite
Text
Evtimov et al. "Disrupting Model Training with Adversarial Shortcuts." ICML 2021 Workshops: AML, 2021.
Markdown
[Evtimov et al. "Disrupting Model Training with Adversarial Shortcuts." ICML 2021 Workshops: AML, 2021.](https://mlanthology.org/icmlw/2021/evtimov2021icmlw-disrupting/)
BibTeX
@inproceedings{evtimov2021icmlw-disrupting,
  title     = {{Disrupting Model Training with Adversarial Shortcuts}},
  author    = {Evtimov, Ivan and Covert, Ian Connick and Kusupati, Aditya and Kohno, Tadayoshi},
  booktitle = {ICML 2021 Workshops: AML},
  year      = {2021},
  url       = {https://mlanthology.org/icmlw/2021/evtimov2021icmlw-disrupting/}
}