Reinforcement Learning Platform for Adversarial Black-Box Attacks with Custom Distortion Filters
Abstract
We present RLAB, a Reinforcement Learning Platform for Adversarial Black-Box untargeted and targeted attacks that allows users to select from various distortion filters to create adversarial examples. The platform uses a Reinforcement Learning agent to add minimal distortion to input images while still causing misclassification by the target model. At each step, the agent uses a novel dual-action method: it explores the input image to identify sensitive regions for adding distortion, while removing previously added noise that has little impact on the target model. This dual action leads to faster and more efficient convergence of the attack. The platform can also be used to measure the robustness of image classification models against specific distortion types. Furthermore, retraining the target model with the generated adversarial samples significantly improves its robustness when evaluated on benchmark datasets. The proposed platform outperforms state-of-the-art methods in the average number of queries required to cause misclassification, advancing trustworthiness with a positive social impact.
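The dual-action idea described above can be illustrated with a minimal query-based sketch. This is not the authors' RLAB implementation: the learned RL policy is replaced by random region selection, and `query_model`, the patch size, and the noise scale are hypothetical placeholders. It only shows how an "add distortion" step and a "remove low-impact distortion" step can alternate under a black-box query budget.

```python
import numpy as np

def dual_action_attack(image, query_model, true_label, patch=8,
                       max_queries=1000, noise_scale=0.05, rng=None):
    """Illustrative dual-action black-box attack loop (sketch, not RLAB).

    `query_model(img)` is assumed to return the target model's class scores;
    only these outputs are used as feedback (black-box setting).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    adv = image.copy()
    h, w = image.shape[:2]
    perturbed = {}  # region top-left corner -> score drop achieved when added

    for _ in range(max_queries):
        scores = query_model(adv)
        if scores.argmax() != true_label:
            return adv, True  # misclassification achieved

        # "Add" action: distort a randomly chosen region (stand-in for the
        # learned policy that would pick the most sensitive region).
        y, x = rng.integers(0, h - patch), rng.integers(0, w - patch)
        noise = rng.normal(0.0, noise_scale, (patch, patch) + image.shape[2:])
        candidate = adv.copy()
        candidate[y:y + patch, x:x + patch] += noise
        new_scores = query_model(candidate)
        drop = scores[true_label] - new_scores[true_label]
        if drop > 0:
            adv = candidate
            perturbed[(y, x)] = drop

        # "Remove" action: revert a previously perturbed region whose recorded
        # contribution was negligible, reducing total added distortion.
        weak = [k for k, v in perturbed.items() if v < 1e-3]
        if weak:
            ry, rx = weak[0]
            adv[ry:ry + patch, rx:rx + patch] = image[ry:ry + patch, rx:rx + patch]
            del perturbed[(ry, rx)]

    return adv, False
```

Under these assumptions, the loop returns the perturbed image and whether the target model was fooled within the query budget; the paper's agent instead learns where to add and remove distortion, which is what drives its lower average query count.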
Cite
Text
Sarkar et al. "Reinforcement Learning Platform for Adversarial Black-Box Attacks with Custom Distortion Filters." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I26.34976
Markdown
[Sarkar et al. "Reinforcement Learning Platform for Adversarial Black-Box Attacks with Custom Distortion Filters." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/sarkar2025aaai-reinforcement/) doi:10.1609/AAAI.V39I26.34976
BibTeX
@inproceedings{sarkar2025aaai-reinforcement,
title = {{Reinforcement Learning Platform for Adversarial Black-Box Attacks with Custom Distortion Filters}},
author = {Sarkar, Soumyendu and Babu, Ashwin Ramesh and Mousavi, Sajad and Gundecha, Vineet and Ghorbanpour, Sahand and Naug, Avisek and Gutiérrez, Ricardo Luna and Guillen, Antonio and Rengarajan, Desik},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
  pages = {27628--27635},
doi = {10.1609/AAAI.V39I26.34976},
url = {https://mlanthology.org/aaai/2025/sarkar2025aaai-reinforcement/}
}