Adversarial Robustness of Program Synthesis Models
Abstract
Automatic program synthesis has seen a resurgence with the rise of deep learning. In this paper, we study the behaviour of program synthesis models under adversarial settings. Our experiments suggest that these program synthesis models are prone to adversarial attacks. The proposed transformer model achieves higher adversarial performance than the current state-of-the-art program synthesis model. We specifically experiment with generative models on the AlgoLisp DSL and showcase the existence of significant dataset bias through different classes of adversarial examples.
Cite
Text
Anand et al. "Adversarial Robustness of Program Synthesis Models." NeurIPS 2021 Workshops: AIPLANS, 2021.
Markdown
[Anand et al. "Adversarial Robustness of Program Synthesis Models." NeurIPS 2021 Workshops: AIPLANS, 2021.](https://mlanthology.org/neuripsw/2021/anand2021neuripsw-adversarial/)
BibTeX
@inproceedings{anand2021neuripsw-adversarial,
  title = {{Adversarial Robustness of Program Synthesis Models}},
  author = {Anand, Mrinal and Kayal, Pratik and Singh, Mayank},
  booktitle = {NeurIPS 2021 Workshops: AIPLANS},
  year = {2021},
  url = {https://mlanthology.org/neuripsw/2021/anand2021neuripsw-adversarial/}
}