From Deep Additive Kernel Learning to Last-Layer Bayesian Neural Networks via Induced Prior Approximation
Abstract
Combining the strengths of deep learning and kernel methods such as Gaussian Processes (GPs), Deep Kernel Learning (DKL) has gained considerable attention in recent years. From the computational perspective, however, DKL becomes challenging when the input dimension of the GP layer is high. To address this challenge, we propose the Deep Additive Kernel (DAK) model, which incorporates i) an additive structure for the last-layer GP; and ii) induced prior approximation for each GP unit. This naturally leads to a last-layer Bayesian neural network (BNN) architecture. The proposed method enjoys the interpretability of DKL as well as the computational advantages of BNN. Empirical results show that the proposed approach outperforms state-of-the-art DKL methods in both regression and classification tasks.
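To illustrate the additive structure referenced in the abstract, the sketch below builds a GP whose kernel is a sum of one-dimensional kernels, one per feature dimension, so each dimension contributes an independent GP unit. This is a minimal NumPy sketch of the general additive-kernel idea only, not the paper's DAK model or its induced prior approximation; the lengthscale and noise values are illustrative assumptions.

```python
import numpy as np

def additive_rbf_kernel(X1, X2, lengthscale=1.0):
    """Additive kernel: k(x, x') = sum_d exp(-(x_d - x'_d)^2 / (2 * lengthscale^2))."""
    K = np.zeros((X1.shape[0], X2.shape[0]))
    for d in range(X1.shape[1]):
        # One-dimensional RBF kernel on feature d, summed into the total kernel.
        diff = X1[:, d:d + 1] - X2[:, d:d + 1].T
        K += np.exp(-0.5 * (diff / lengthscale) ** 2)
    return K

# Toy GP regression with the additive kernel. In a DKL setting the rows of X
# would be last-layer features produced by a neural network (an assumption
# here; we just sample them randomly).
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))                      # 20 points, 5 feature dimensions
y = X.sum(axis=1) + 0.01 * rng.normal(size=20)    # additive ground truth + noise
K = additive_rbf_kernel(X, X)
noise = 1e-2                                      # illustrative noise variance
alpha = np.linalg.solve(K + noise * np.eye(20), y)
mean = K @ alpha                                  # GP posterior mean at the training inputs
```

Because the kernel decomposes across dimensions, each one-dimensional component can be approximated independently, which is the property the additive structure exploits when the GP-layer input dimension is high.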
Cite
Text
Zhao et al. "From Deep Additive Kernel Learning to Last-Layer Bayesian Neural Networks via Induced Prior Approximation." Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, 2025.
Markdown
[Zhao et al. "From Deep Additive Kernel Learning to Last-Layer Bayesian Neural Networks via Induced Prior Approximation." Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, 2025.](https://mlanthology.org/aistats/2025/zhao2025aistats-deep/)
BibTeX
@inproceedings{zhao2025aistats-deep,
title = {{From Deep Additive Kernel Learning to Last-Layer Bayesian Neural Networks via Induced Prior Approximation}},
author = {Zhao, Wenyuan and Chen, Haoyuan and Liu, Tie and Tuo, Rui and Tian, Chao},
booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
year = {2025},
pages = {4231--4239},
volume = {258},
url = {https://mlanthology.org/aistats/2025/zhao2025aistats-deep/}
}