Incorporating Interpretable Output Constraints in Bayesian Neural Networks
Abstract
Domains where supervised models are deployed often come with task-specific constraints, such as prior expert knowledge on the ground-truth function, or desiderata like safety and fairness. We introduce a novel probabilistic framework for reasoning with such constraints and formulate a prior that enables us to effectively incorporate them into Bayesian neural networks (BNNs), including a variant that can be amortized over tasks. The resulting Output-Constrained BNN (OC-BNN) is fully consistent with the Bayesian framework for uncertainty quantification and is amenable to black-box inference. Unlike typical BNN inference in uninterpretable parameter space, OC-BNNs widen the range of functional knowledge that can be incorporated, especially for model users without expertise in machine learning. We demonstrate the efficacy of OC-BNNs on real-world datasets, spanning multiple domains such as healthcare, criminal justice, and credit scoring.
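As a rough illustration of the idea in the abstract, the sketch below augments a standard Gaussian weight prior with a soft penalty on network outputs at inputs sampled from a constrained region, giving an unnormalized log posterior that any black-box sampler (e.g. HMC) or variational method could target. This is a minimal sketch under simplifying assumptions: the tiny network, the quadratic violation penalty, and names such as `bnn_forward` and `constraint_log_prior` are illustrative, not the paper's exact formulation of the output-constraint prior.

```python
import numpy as np

rng = np.random.default_rng(0)

def bnn_forward(weights, x):
    """Tiny one-hidden-layer network; `weights` is a flat parameter vector of length 31."""
    w1, b1, w2, b2 = weights[:10], weights[10:20], weights[20:30], weights[30]
    h = np.tanh(np.outer(x, w1) + b1)   # (n, 10)
    return h @ w2 + b2                  # (n,)

def log_prior_weights(weights, sigma=1.0):
    """Standard isotropic Gaussian prior over network parameters."""
    return -0.5 * np.sum(weights ** 2) / sigma ** 2

def constraint_log_prior(weights, x_constr, lower, upper, gamma=50.0):
    """Soft penalty pushing outputs at sampled constraint inputs into [lower, upper].
    An illustrative surrogate for an output-constraint prior, not the exact term
    used in the paper."""
    y = bnn_forward(weights, x_constr)
    violation = np.maximum(lower - y, 0.0) + np.maximum(y - upper, 0.0)
    return -gamma * np.sum(violation ** 2)

def log_joint(weights, x, y, x_constr, noise=0.1):
    """Unnormalized log posterior: Gaussian likelihood + weight prior + constraint prior."""
    resid = y - bnn_forward(weights, x)
    log_lik = -0.5 * np.sum(resid ** 2) / noise ** 2
    return log_lik + log_prior_weights(weights) + constraint_log_prior(
        weights, x_constr, lower=0.0, upper=np.inf)

# Toy usage: constrain predictions to be non-negative on the interval [-2, 2].
x_train = rng.uniform(-2, 2, size=20)
y_train = np.abs(x_train) + 0.1 * rng.standard_normal(20)
x_constr = rng.uniform(-2, 2, size=50)   # points at which the constraint is enforced
w0 = rng.standard_normal(31)
print(log_joint(w0, x_train, y_train, x_constr))
```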
Cite
Text
Yang et al. "Incorporating Interpretable Output Constraints in Bayesian Neural Networks." Neural Information Processing Systems, 2020.
Markdown
[Yang et al. "Incorporating Interpretable Output Constraints in Bayesian Neural Networks." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/yang2020neurips-incorporating/)
BibTeX
@inproceedings{yang2020neurips-incorporating,
title = {{Incorporating Interpretable Output Constraints in Bayesian Neural Networks}},
author = {Yang, Wanqian and Lorch, Lars and Graule, Moritz and Lakkaraju, Himabindu and Doshi-Velez, Finale},
booktitle = {Neural Information Processing Systems},
year = {2020},
url = {https://mlanthology.org/neurips/2020/yang2020neurips-incorporating/}
}