Attacking CNNs in Histopathology with SNAP: Sporadic and Naturalistic Adversarial Patches (Student Abstract)
Abstract
Convolutional neural networks (CNNs) are being increasingly adopted in medical imaging. However, in the race to develop accurate models, their robustness is often overlooked. This raises a significant concern given the safety-critical nature of healthcare systems. Here, we highlight the vulnerability of CNNs to a sporadic and naturalistic adversarial patch attack (SNAP). We train SNAP to mislead a ResNet50 model predicting metastasis in histopathological scans of lymph node sections, lowering its accuracy by 27%. This work emphasizes the need for defense strategies before deploying CNNs in critical healthcare settings.
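The mechanics of a patch-style attack can be sketched in miniature. The code below is a toy illustration, not the authors' SNAP implementation: it substitutes a hand-set linear classifier for ResNet50 and a 1-D pixel vector for a histopathology scan, and it optimizes only a small "patch" of pixels by gradient-sign ascent until the model's prediction flips. All names and numbers are hypothetical.

```python
def predict(w, b, x):
    """Toy linear classifier: 1 ("metastasis") if w.x + b > 0, else 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def patch_attack(w, b, x, patch_idx, step=0.5, iters=20, clip=(0.0, 1.0)):
    """Perturb only the pixels in patch_idx to flip the clean prediction.

    For a linear model, the gradient of the score w.r.t. pixel i is w[i],
    so a gradient-sign step is just +/- step depending on sign(w[i]).
    """
    x = list(x)
    target = 1 - predict(w, b, x)      # flip whatever the clean label is
    sign = 1 if target == 1 else -1    # push the score up or down
    for _ in range(iters):
        for i in patch_idx:            # the patch: only these pixels change
            x[i] += sign * step * (1 if w[i] > 0 else -1)
            x[i] = min(max(x[i], clip[0]), clip[1])  # keep pixels in range
        if predict(w, b, x) == target:
            break
    return x

# Example: a 6-pixel "image" classified positive; a 2-pixel patch flips it.
w = [0.8, -0.2, 0.5, 0.1, -0.6, 0.3]
b = -0.4
x_clean = [0.9, 0.1, 0.8, 0.5, 0.2, 0.6]
x_adv = patch_attack(w, b, x_clean, patch_idx=[0, 2])
```

The key property shared with patch attacks like SNAP is locality: only a small, contiguous region of the input is modified, yet the prediction changes; against a deep network the per-pixel gradient would come from backpropagation rather than a closed form.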
Cite
Text
Kumar et al. "Attacking CNNs in Histopathology with SNAP: Sporadic and Naturalistic Adversarial Patches (Student Abstract)." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I21.30468
Markdown
[Kumar et al. "Attacking CNNs in Histopathology with SNAP: Sporadic and Naturalistic Adversarial Patches (Student Abstract)." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/kumar2024aaai-attacking/) doi:10.1609/AAAI.V38I21.30468
BibTeX
@inproceedings{kumar2024aaai-attacking,
title = {{Attacking CNNs in Histopathology with SNAP: Sporadic and Naturalistic Adversarial Patches (Student Abstract)}},
author = {Kumar, Daya and Sharma, Abhijith and Narayan, Apurva},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2024},
pages = {23550-23551},
doi = {10.1609/AAAI.V38I21.30468},
url = {https://mlanthology.org/aaai/2024/kumar2024aaai-attacking/}
}