Enabling Mixed Effects Neural Networks for Diverse, Clustered Data Using Monte Carlo Methods
Abstract
A classifier is considered interpretable if each of its decisions has an explanation small enough to be easily understood by a human user. A DNF can be seen as a binary classifier κ over Boolean domains. The size of an explanation of a positive decision taken by a DNF κ is bounded by the size of the terms in κ, since we can explain a positive decision by giving a term of κ that evaluates to true. Since both positive and negative decisions must be explained, we consider interpretable DNFs to be those κ for which both κ and its complement can be expressed as DNFs composed of terms of bounded size. In this paper, we investigate the family of k-DNFs whose complements can also be expressed as k-DNFs. We compare two such families, namely depth-k decision trees and nested k-DNFs, a novel family of models. Experimental evidence indicates that nested k-DNFs are an interesting alternative to decision trees in terms of interpretability and accuracy.
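The idea that a positive decision of a DNF is explained by a satisfied term can be sketched in a few lines of Python. This is an illustrative sketch only, not code from the paper: the representation (a DNF as a list of terms, each term a mapping from variable names to required Boolean values) and all names are assumptions made for the example.

```python
def term_satisfied(term, assignment):
    """A term (conjunction of literals) is true iff every literal matches."""
    return all(assignment[var] == val for var, val in term.items())

def explain_positive(dnf, assignment):
    """Return a satisfied term as the explanation, or None if the DNF is false.

    For a k-DNF, every term has at most k literals, so any positive
    decision is explained by at most k literals.
    """
    for term in dnf:
        if term_satisfied(term, assignment):
            return term
    return None

# A 2-DNF: (x1 AND NOT x2) OR (x3)
kappa = [{"x1": True, "x2": False}, {"x3": True}]
x = {"x1": True, "x2": False, "x3": False}
print(explain_positive(kappa, x))  # -> {'x1': True, 'x2': False}
```

Explaining a negative decision is not symmetric: it requires showing that every term is falsified, which is why the abstract restricts attention to DNFs whose complement is itself a k-DNF, so that negative decisions also get small term-sized explanations.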
Cite
Text
Tschalzev et al. "Enabling Mixed Effects Neural Networks for Diverse, Clustered Data Using Monte Carlo Methods." International Joint Conference on Artificial Intelligence, 2024. doi:10.24963/ijcai.2024/555

Markdown

[Tschalzev et al. "Enabling Mixed Effects Neural Networks for Diverse, Clustered Data Using Monte Carlo Methods." International Joint Conference on Artificial Intelligence, 2024.](https://mlanthology.org/ijcai/2024/tschalzev2024ijcai-enabling/) doi:10.24963/ijcai.2024/555

BibTeX
@inproceedings{tschalzev2024ijcai-enabling,
title = {{Enabling Mixed Effects Neural Networks for Diverse, Clustered Data Using Monte Carlo Methods}},
author = {Tschalzev, Andrej and Nitschke, Paul and Kirchdorfer, Lukas and Lüdtke, Stefan and Bartelt, Christian and Stuckenschmidt, Heiner},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2024},
pages = {5018-5026},
doi = {10.24963/ijcai.2024/555},
url = {https://mlanthology.org/ijcai/2024/tschalzev2024ijcai-enabling/}
}