Improving Neural Additive Models with Bayesian Principles
Abstract
Neural additive models (NAMs) enhance the transparency of deep neural networks by handling input features in separate additive sub-networks. However, they lack inherent mechanisms that provide calibrated uncertainties and enable selection of relevant features and interactions. Approaching NAMs from a Bayesian perspective, we augment them in three primary ways, namely by a) providing credible intervals for the individual additive sub-networks; b) estimating the marginal likelihood to perform an implicit selection of features via an empirical Bayes procedure; and c) facilitating the ranking of feature pairs as candidates for second-order interaction in fine-tuned models. In particular, we develop Laplace-approximated NAMs (LA-NAMs), which show improved empirical performance on tabular datasets and challenging real-world medical tasks.
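The additive structure the abstract refers to — one sub-network per input feature, with outputs summed to form the prediction — can be sketched in plain Python. The per-feature shape functions below are hypothetical stand-ins for trained sub-networks, not the authors' implementation; the point is only that each feature's contribution is computed separately and is therefore directly inspectable.

```python
import math

# A NAM predicts f(x) = beta0 + sum_j f_j(x_j), where each f_j is a
# separate sub-network that sees only feature j.  For illustration the
# sub-networks are replaced by simple hand-picked shape functions.
def make_nam(shape_fns, bias=0.0):
    """Return an additive predictor built from per-feature shape functions."""
    def predict(x):
        assert len(x) == len(shape_fns), "one shape function per feature"
        return bias + sum(f_j(x_j) for f_j, x_j in zip(shape_fns, x))
    return predict

# Hypothetical shape functions for a 3-feature problem.
shape_fns = [
    lambda x: 0.5 * x,        # roughly linear effect
    lambda x: math.sin(x),    # periodic effect
    lambda x: x ** 2 - 1.0,   # quadratic effect
]

nam = make_nam(shape_fns, bias=0.1)
x = [2.0, 0.0, 1.0]
contributions = [f(v) for f, v in zip(shape_fns, x)]
print(contributions)  # per-feature contributions: [1.0, 0.0, 0.0]
print(nam(x))         # 0.1 + 1.0 + 0.0 + 0.0 = 1.1
```

This transparency is what the paper builds on: because each sub-network is a separate module, Bayesian treatment can attach a credible interval to each per-feature contribution individually.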
Cite
Text
Bouchiat et al. "Improving Neural Additive Models with Bayesian Principles." International Conference on Machine Learning, 2024.
Markdown
[Bouchiat et al. "Improving Neural Additive Models with Bayesian Principles." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/bouchiat2024icml-improving/)
BibTeX
@inproceedings{bouchiat2024icml-improving,
title = {{Improving Neural Additive Models with Bayesian Principles}},
author = {Bouchiat, Kouroche and Immer, Alexander and Yèche, Hugo and Rätsch, Gunnar and Fortuin, Vincent},
booktitle = {International Conference on Machine Learning},
year = {2024},
pages = {4416--4443},
volume = {235},
url = {https://mlanthology.org/icml/2024/bouchiat2024icml-improving/}
}