Reduced Implication-Bias Logic Loss for Neuro-Symbolic Learning
Abstract
Integrating logical reasoning and machine learning by approximating logical inference with differentiable operators is a widely used technique in the field of Neuro-Symbolic Learning. However, some differentiable operators could introduce significant biases during backpropagation, which can degrade the performance of Neuro-Symbolic systems. In this paper, we demonstrate that the loss functions derived from fuzzy logic operators commonly exhibit a bias, referred to as Implication Bias. To mitigate this bias, we propose a simple yet efficient method to transform the biased loss functions into Reduced Implication-bias Logic Loss (RILL). Empirical studies demonstrate that RILL outperforms the biased logic loss functions, especially when the knowledge base is incomplete or the supervised training data is insufficient.
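To make the notion of implication bias concrete, below is a minimal PyTorch sketch, assuming the Reichenbach implication I(a, b) = 1 - a + a·b as the fuzzy operator. The `reduced_bias_loss` shown is an illustrative gradient-blocking variant for intuition only, not the paper's RILL construction.

```python
import torch

# Illustrative sketch (not the paper's exact formulation): a fuzzy logic loss
# for the rule A -> B using the Reichenbach implication I(a, b) = 1 - a + a*b.
# Minimizing 1 - I(a, b) = a * (1 - b) can be satisfied either by making the
# consequent b true or by driving the antecedent a toward false; the latter
# shortcut is the implication bias described in the abstract.

def reichenbach_implication_loss(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Biased logic loss: d(loss)/d(a) = (1 - b) >= 0, so gradients always
    push the antecedent's truth value down, regardless of the data."""
    return a * (1.0 - b)

def reduced_bias_loss(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Hypothetical mitigation for illustration only: block the gradient
    through the antecedent so the rule can only be satisfied by raising b."""
    return a.detach() * (1.0 - b)

if __name__ == "__main__":
    a = torch.tensor(0.8, requires_grad=True)  # truth value of the rule body
    b = torch.tensor(0.3, requires_grad=True)  # truth value of the rule head

    reichenbach_implication_loss(a, b).backward()
    print(a.grad, b.grad)  # a.grad = 0.7 > 0: the antecedent is pushed toward false

    a.grad = b.grad = None
    reduced_bias_loss(a, b).backward()
    print(a.grad, b.grad)  # a.grad stays None: only the consequent is updated
```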
Cite
Text
He et al. "Reduced Implication-Bias Logic Loss for Neuro-Symbolic Learning." Machine Learning, 2024. doi:10.1007/S10994-023-06436-4
Markdown
[He et al. "Reduced Implication-Bias Logic Loss for Neuro-Symbolic Learning." Machine Learning, 2024.](https://mlanthology.org/mlj/2024/he2024mlj-reduced/) doi:10.1007/S10994-023-06436-4
BibTeX
@article{he2024mlj-reduced,
title = {{Reduced Implication-Bias Logic Loss for Neuro-Symbolic Learning}},
author = {He, Hao-Yuan and Dai, Wang-Zhou and Li, Ming},
journal = {Machine Learning},
year = {2024},
pages = {3357-3377},
doi = {10.1007/S10994-023-06436-4},
volume = {113},
url = {https://mlanthology.org/mlj/2024/he2024mlj-reduced/}
}