Gaussian Process Neural Additive Models
Abstract
Deep neural networks have revolutionized many fields, but their black-box nature can prevent their adoption in fields such as healthcare and finance, where interpretable and explainable models are required. The recent development of Neural Additive Models (NAMs) marks a major step toward interpretable deep learning for tabular datasets. In this paper, we propose a new subclass of NAMs that uses a single-layer neural network construction of the Gaussian process via random Fourier features, which we call Gaussian Process Neural Additive Models (GP-NAM). GP-NAM has the advantage of a convex objective function and a number of trainable parameters that grows linearly with the feature dimension. It suffers no loss in performance compared with deeper NAM approaches because GPs are well suited to learning complex non-parametric univariate functions. We demonstrate the performance of GP-NAM on several tabular datasets, showing that it achieves comparable performance in both classification and regression tasks with a massive reduction in the number of parameters.
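The construction described in the abstract — one random-Fourier-feature map per input dimension, summed into an additive predictor with a convex fitting objective — can be sketched as below. This is an illustrative assumption of how such a model might look, not the authors' implementation: the function names, the RBF lengthscale, and the plain ridge-regression fit are all choices made for the sketch.

```python
import numpy as np


def fit_gp_nam(X, y, num_features=100, lengthscale=1.0, reg=1e-3, seed=0):
    """Sketch of an additive model built from random Fourier features.

    Each input dimension j gets its own random feature map, approximating
    a univariate RBF-kernel GP shape function f_j(x_j); the prediction is
    the sum of these shape functions. Fitting the linear weights is a
    convex (ridge regression) problem, and the parameter count is
    d * num_features — linear in the number of input dimensions d.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # One set of random spectral frequencies and phases per dimension.
    Ws = [rng.normal(scale=1.0 / lengthscale, size=num_features) for _ in range(d)]
    Bs = [rng.uniform(0.0, 2.0 * np.pi, size=num_features) for _ in range(d)]

    def feature_map(Xnew):
        # Per-dimension cosine features, concatenated column-wise.
        blocks = [
            np.sqrt(2.0 / num_features) * np.cos(np.outer(Xnew[:, j], Ws[j]) + Bs[j])
            for j in range(d)
        ]
        return np.hstack(blocks)

    Phi = feature_map(X)
    # Convex ridge objective: ||Phi w - y||^2 + reg * ||w||^2.
    w = np.linalg.solve(Phi.T @ Phi + reg * np.eye(Phi.shape[1]), Phi.T @ y)
    return lambda Xnew: feature_map(Xnew) @ w
```

On an additive target such as `y = sin(2*x0) + 0.5*x1**2`, the returned predictor fits the training data closely because each shape function only has to capture a univariate effect.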
Cite
Text
Zhang et al. "Gaussian Process Neural Additive Models." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I15.29628
Markdown
[Zhang et al. "Gaussian Process Neural Additive Models." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/zhang2024aaai-gaussian/) doi:10.1609/AAAI.V38I15.29628
BibTeX
@inproceedings{zhang2024aaai-gaussian,
title = {{Gaussian Process Neural Additive Models}},
author = {Zhang, Wei and Barr, Brian and Paisley, John},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2024},
pages = {16865--16872},
doi = {10.1609/AAAI.V38I15.29628},
url = {https://mlanthology.org/aaai/2024/zhang2024aaai-gaussian/}
}