Supervised Learning with Background Knowledge
Abstract
We consider the task of supervised learning, focusing on the impact that background knowledge may have on the accuracy and robustness of learned classifiers. We consider three types of background knowledge: causal domain knowledge, functional dependencies, and logical constraints. Our findings are set in the context of an empirical study that compares two classes of classifiers: Arithmetic Circuit (AC) classifiers compiled from Bayesian network models with varying degrees of background knowledge, and Convolutional Neural Network (CNN) classifiers. We report on the accuracy and robustness of these classifiers on two tasks concerned with recognizing synthesized shapes in noisy images. We show that classifiers encoding background knowledge need much less data to attain a given accuracy, and are more robust both to the noise level in the data and to mismatches between the noise patterns in the training and testing data.
Cite
Chen et al. "Supervised Learning with Background Knowledge." Proceedings of PGM 2020, 2020.
BibTeX
@inproceedings{chen2020pgm-supervised,
title = {{Supervised Learning with Background Knowledge}},
author = {Chen, Yizuo and Choi, Arthur and Darwiche, Adnan},
booktitle = {Proceedings of PGM 2020},
year = {2020},
pages = {89--100},
volume = {138},
url = {https://mlanthology.org/pgm/2020/chen2020pgm-supervised/}
}