Inverse-Reference Priors for Fisher Regularization of Bayesian Neural Networks
Abstract
Recent studies have shown that the generalization ability of deep neural networks (DNNs) is closely related to the Fisher information matrix (FIM) calculated during the early training phase. Several methods have been proposed to regularize the FIM for increased generalization of DNNs. However, they cannot be used directly for Bayesian neural networks (BNNs), because the parameters of BNNs are random variables, which makes the FIM difficult to calculate. To address this problem, we regularize the FIM of BNNs by specifying a new, suitable prior distribution called the inverse-reference (IR) prior. To regularize the FIM, the IR prior is derived as the inverse of the reference prior, which imposes minimal prior knowledge on the parameters and maximizes the trace of the FIM. Using various benchmark image datasets and BNN architectures, we demonstrate that the IR prior can enhance the generalization ability of BNNs on large-scale data over previously used priors while providing adequate uncertainty quantification.
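As an illustrative sketch only (not the paper's IR-prior construction), the quantity being regularized, the trace of the FIM, can be computed in closed form for a simple model such as logistic regression, where the FIM is F = Xᵀ diag(p(1−p)) X. The function name below is hypothetical:

```python
import numpy as np

def fisher_trace_logistic(X, w):
    """Trace of the Fisher information matrix for logistic regression.

    Since F = X^T diag(p(1-p)) X, we have
    tr(F) = sum_i p_i (1 - p_i) ||x_i||^2, computable in O(n d).
    """
    p = 1.0 / (1.0 + np.exp(-X @ w))  # predicted class probabilities
    return float(np.sum(p * (1.0 - p) * np.sum(X**2, axis=1)))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w = rng.normal(size=5)
tr = fisher_trace_logistic(X, w)

# Cross-check against the explicit matrix construction of the FIM.
p = 1.0 / (1.0 + np.exp(-X @ w))
F = X.T @ ((p * (1.0 - p))[:, None] * X)
```

A Fisher regularizer for a DNN would add a scaled version of such a trace term to the training loss; the paper instead encodes the regularization into the prior itself, which is what makes it applicable to BNNs.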
Cite
Text
Kim et al. "Inverse-Reference Priors for Fisher Regularization of Bayesian Neural Networks." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I7.25997
Markdown
[Kim et al. "Inverse-Reference Priors for Fisher Regularization of Bayesian Neural Networks." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/kim2023aaai-inverse/) doi:10.1609/AAAI.V37I7.25997
BibTeX
@inproceedings{kim2023aaai-inverse,
title = {{Inverse-Reference Priors for Fisher Regularization of Bayesian Neural Networks}},
author = {Kim, Keunseo and Ma, Eun-Yeol and Choi, Jeongman and Kim, Heeyoung},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2023},
pages = {8264--8272},
doi = {10.1609/AAAI.V37I7.25997},
url = {https://mlanthology.org/aaai/2023/kim2023aaai-inverse/}
}