Logically Consistent Adversarial Attacks for Soft Theorem Provers

Abstract

Recent efforts within the AI community have yielded impressive results on “soft theorem proving” over natural language sentences using language models. We propose a novel, generative adversarial framework for probing and improving these models’ reasoning capabilities. Adversarial attacks in this domain suffer from the logical inconsistency problem, whereby perturbations to the input may silently alter the ground-truth label. Our Logically consistent AdVersarial Attacker, LAVA, addresses this by combining a structured generative process with a symbolic solver, guaranteeing logical consistency. Our framework successfully generates adversarial attacks and identifies global weaknesses common across multiple target models. Our analyses reveal naive heuristics and vulnerabilities in these models’ reasoning capabilities, exposing an incomplete grasp of logical deduction under logic programs. Finally, in addition to effective probing of these models, we show that training on the generated samples improves the target model’s performance.
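The logical inconsistency problem mentioned above can be illustrated with a toy example (a hedged sketch, not the paper's implementation): even a small perturbation to a rulebase can flip whether the query is entailed, so an attacker must re-run a symbolic solver after each perturbation to keep the ground-truth label consistent. The facts and rules below are invented for illustration.

```python
# Illustrative sketch of the logical inconsistency problem (not LAVA itself).
# A symbolic solver (here, naive forward chaining over definite clauses)
# recomputes the ground-truth entailment label after each perturbation.

def entails(facts, rules, query):
    """Forward chaining: facts is a set of atoms,
    rules is a list of (body_atoms, head_atom) pairs."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return query in known

# Hypothetical rulebase in the style of soft-theorem-proving datasets.
facts = {"cold(erin)", "likes(erin, snow)"}
rules = [({"cold(erin)", "likes(erin, snow)"}, "skis(erin)")]

print(entails(facts, rules, "skis(erin)"))  # True: query is entailed

# Dropping a single fact flips the label, so an attack that only
# perturbed the text would now carry a stale, inconsistent label:
perturbed = facts - {"likes(erin, snow)"}
print(entails(perturbed, rules, "skis(erin)"))  # False
```

Running the solver on both the original and perturbed theories is what guarantees that each generated adversarial sample keeps a logically consistent label.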

Cite

Text

Gaskell et al. "Logically Consistent Adversarial Attacks for Soft Theorem Provers." International Joint Conference on Artificial Intelligence, 2022. doi:10.24963/ijcai.2022/573

Markdown

[Gaskell et al. "Logically Consistent Adversarial Attacks for Soft Theorem Provers." International Joint Conference on Artificial Intelligence, 2022.](https://mlanthology.org/ijcai/2022/gaskell2022ijcai-logically/) doi:10.24963/ijcai.2022/573

BibTeX

@inproceedings{gaskell2022ijcai-logically,
  title     = {{Logically Consistent Adversarial Attacks for Soft Theorem Provers}},
  author    = {Gaskell, Alexander and Miao, Yishu and Toni, Francesca and Specia, Lucia},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {4129--4135},
  doi       = {10.24963/ijcai.2022/573},
  url       = {https://mlanthology.org/ijcai/2022/gaskell2022ijcai-logically/}
}