Convex and Bilevel Optimization for Neural-Symbolic Inference and Learning

Abstract

We leverage convex and bilevel optimization techniques to develop a general gradient-based parameter learning framework for neural-symbolic (NeSy) systems. We demonstrate our framework with NeuPSL, a state-of-the-art NeSy architecture. To achieve this, we propose a smooth primal and dual formulation of NeuPSL inference and show that learning gradients are functions of the optimal dual variables. Additionally, we develop a dual block coordinate descent algorithm for the new formulation that naturally exploits warm-starts, yielding learning runtime improvements of over $100\times$ relative to the current best NeuPSL inference method. Finally, we provide extensive empirical evaluations across $8$ datasets covering a range of tasks and demonstrate that our learning framework achieves up to a $16$ percentage point improvement in prediction performance over alternative learning methods.
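
The following is a minimal NumPy sketch of the gradient identity behind this framework, not code from the paper: the names (potentials, energy, infer) are illustrative, projected gradient descent stands in for the paper's dual block coordinate descent, and a simple energy-difference loss stands in for the full bilevel objective. Because the rule weights enter the energy linearly, Danskin's theorem gives the outer gradient directly from the potential values at the inferred optimum, so no differentiation through the inner solver is needed, and the inner solve can be warm-started at the previous optimum.

import numpy as np

def potentials(y, A, b):
    # Squared-hinge potentials phi_k(y) = max(0, a_k . y + b_k)^2:
    # smooth, convex stand-ins for NeuPSL's smoothed rule potentials.
    return np.maximum(0.0, A @ y + b) ** 2

def energy(y, w, A, b):
    # Weighted energy E(y; w) = sum_k w_k * phi_k(y); w enters linearly.
    return w @ potentials(y, A, b)

def infer(w, A, b, y0, steps=500):
    # Inner (inference) problem: min over y in [0,1]^n of E(y; w).
    # Projected gradient descent here stands in for the paper's dual
    # block coordinate descent; y0 carries the warm start.
    L = 2.0 * np.linalg.norm(A.T @ (w[:, None] * A), 2) + 1e-12  # Lipschitz bound
    y = y0.copy()
    for _ in range(steps):
        slack = np.maximum(0.0, A @ y + b)
        y = np.clip(y - (A.T @ (2.0 * w * slack)) / L, 0.0, 1.0)
    return y

rng = np.random.default_rng(0)
n_vars, n_rules = 5, 8
A = rng.normal(size=(n_rules, n_vars))
b = rng.normal(size=n_rules)
y_label = (rng.random(n_vars) > 0.5).astype(float)  # toy supervision

w = np.ones(n_rules)            # symbolic rule weights to learn
y_warm = np.full(n_vars, 0.5)   # warm start for the inner solver
for step in range(50):
    y_star = infer(w, A, b, y_warm)
    y_warm = y_star  # reuse the previous optimum: warm-started learning
    # Outer loss: E(y_label; w) - min_y E(y; w), convex in w.
    # Danskin's theorem: its (sub)gradient is phi(y_label) - phi(y_star);
    # in practice a regularizer or simplex constraint on w is added to
    # rule out the trivial w = 0 minimizer.
    loss = energy(y_label, w, A, b) - energy(y_star, w, A, b)
    grad_w = potentials(y_label, A, b) - potentials(y_star, A, b)
    w = np.maximum(0.0, w - 0.05 * grad_w)  # projected step, keep w >= 0
    if step % 10 == 0:
        print(f"step {step:2d}  energy loss {loss:.4f}")

Warm-starting pays off here because consecutive outer iterations change w only slightly, so the previous optimum is close to the new one; this is the same effect the paper exploits with warm starts in its dual block coordinate descent.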

Cite

Text

Dickens et al. "Convex and Bilevel Optimization for Neural-Symbolic Inference and Learning." International Conference on Machine Learning, 2024.

Markdown

[Dickens et al. "Convex and Bilevel Optimization for Neural-Symbolic Inference and Learning." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/dickens2024icml-convex/)

BibTeX

@inproceedings{dickens2024icml-convex,
  title     = {{Convex and Bilevel Optimization for Neural-Symbolic Inference and Learning}},
  author    = {Dickens, Charles Andrew and Gao, Changyu and Pryor, Connor and Wright, Stephen and Getoor, Lise},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {10865--10896},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/dickens2024icml-convex/}
}