Safe Neurosymbolic Learning with Differentiable Symbolic Execution

Abstract

We study the problem of learning verifiably safe parameters for programs that use neural networks as well as symbolic, human-written code. Such neurosymbolic programs arise in many safety-critical domains. However, because they need not be differentiable, it is hard to learn their parameters using existing gradient-based approaches to safe learning. Our method, Differentiable Symbolic Execution (DSE), samples control-flow paths in a program, symbolically constructs worst-case "safety losses" along these paths, and backpropagates the gradients of these losses through program operations using a generalization of the REINFORCE estimator. We evaluate the method on a mix of synthetic tasks and real-world benchmarks. Our experiments show that DSE significantly outperforms the state-of-the-art DiffAI method on these tasks.
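
To make the gradient-estimation idea concrete, the following is a minimal PyTorch sketch, not the authors' implementation: a neural controller feeds a symbolic `if` branch, the branch is relaxed into a sampled Bernoulli path choice, and a REINFORCE-style score-function term carries the safety loss's gradient through the discrete decision. The controller, the assumed safe set |y| ≤ 1, and the hinge penalty are all illustrative choices, and the paper's symbolic worst-case loss is simplified here to a pointwise penalty on sampled inputs.

```python
# Illustrative sketch only: sample a control-flow path through a
# neurosymbolic program, score its safety loss, and differentiate
# through the sampled branch with a REINFORCE-style surrogate.
import torch

torch.manual_seed(0)

controller = torch.nn.Linear(1, 1)            # stand-in neural component
opt = torch.optim.Adam(controller.parameters(), lr=1e-2)

def run_program(x):
    """One execution with a sampled branch.

    Returns per-example (safety_loss, log_prob of the sampled path)."""
    u = controller(x)                          # neural action
    # Symbolic branch `if u > 0`: relax it into a Bernoulli sample so the
    # path choice is stochastic and has a differentiable likelihood.
    p_then = torch.sigmoid(u)                  # probability of the then-branch
    take_then = torch.bernoulli(p_then).bool()
    log_prob = torch.where(take_then, p_then, 1.0 - p_then).log()
    # Human-written symbolic code on each path (illustrative).
    y = torch.where(take_then, x + u, x - u)
    # Hinge "safety loss": penalize leaving the assumed safe set |y| <= 1.
    safety_loss = torch.relu(y.abs() - 1.0)
    return safety_loss, log_prob

for step in range(200):
    x = torch.rand(32, 1) * 2.0 - 1.0          # sampled program inputs
    safety_loss, log_prob = run_program(x)
    # Surrogate objective: the pathwise term plus a score-function
    # (REINFORCE) term that credits each sampled path with its loss.
    surrogate = safety_loss.sum() + (safety_loss.detach() * log_prob).sum()
    opt.zero_grad()
    surrogate.backward()
    opt.step()
```

Differentiating the surrogate reproduces both gradient contributions: the first term backpropagates through the arithmetic on each path, while the detached-loss-times-log-probability term moves probability mass away from paths with high safety loss, which is the role the REINFORCE-style estimator plays in the abstract's description.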

Cite

Text

Yang and Chaudhuri. "Safe Neurosymbolic Learning with Differentiable Symbolic Execution." International Conference on Learning Representations, 2022.

Markdown

[Yang and Chaudhuri. "Safe Neurosymbolic Learning with Differentiable Symbolic Execution." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/yang2022iclr-safe/)

BibTeX

@inproceedings{yang2022iclr-safe,
  title     = {{Safe Neurosymbolic Learning with Differentiable Symbolic Execution}},
  author    = {Yang, Chenxi and Chaudhuri, Swarat},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/yang2022iclr-safe/}
}