Calibrating Transformers via Sparse Gaussian Processes
Abstract
Transformer models have achieved profound success in prediction tasks across a wide range of applications in natural language processing, speech recognition, and computer vision. Extending the Transformer's success to safety-critical domains requires calibrated uncertainty estimation, which remains under-explored. To address this, we propose Sparse Gaussian Process attention (SGPA), which performs Bayesian inference directly in the output space of multi-head attention blocks (MHAs) in Transformers to calibrate their uncertainty. It replaces the scaled dot-product operation with a valid symmetric kernel and uses sparse Gaussian process (SGP) techniques to approximate the posterior processes of MHA outputs. Empirically, on a suite of prediction tasks on text, images and graphs, SGPA-based Transformers achieve competitive predictive accuracy, while noticeably improving both in-distribution calibration and out-of-distribution robustness and detection.
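To make the kernel view of attention concrete, the sketch below shows one attention head in which the asymmetric scaled dot-product between separate query and key projections is replaced by a symmetric kernel, obtained here by sharing a single projection. This is a minimal illustration of the kernel-substitution idea only, not the authors' implementation, and it omits the sparse GP posterior approximation over MHA outputs; all names and shapes are assumptions for the example.

```python
# Minimal sketch (not the authors' code): one attention head where the
# scaled dot-product score is replaced by a valid symmetric kernel,
# obtained by sharing a single query/key projection. Shapes and names
# are illustrative assumptions.
import torch
import torch.nn as nn


class SymmetricKernelAttention(nn.Module):
    def __init__(self, d_model: int, d_head: int):
        super().__init__()
        # Shared projection makes the Gram matrix symmetric, i.e. a valid kernel.
        self.shared_qk = nn.Linear(d_model, d_head)
        self.value = nn.Linear(d_model, d_head)
        self.scale = d_head ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        qk = self.shared_qk(x)                        # queries and keys share weights
        # Symmetric Gram matrix from the scaled dot-product kernel.
        gram = torch.einsum("bnd,bmd->bnm", qk, qk) * self.scale
        weights = gram.softmax(dim=-1)
        return weights @ self.value(x)                # (batch, seq_len, d_head)


# Usage: drop-in replacement for one head of standard self-attention.
attn = SymmetricKernelAttention(d_model=64, d_head=16)
out = attn(torch.randn(2, 10, 64))
print(out.shape)  # torch.Size([2, 10, 16])
```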
Cite
Text
Chen and Li. "Calibrating Transformers via Sparse Gaussian Processes." International Conference on Learning Representations, 2023.
Markdown
[Chen and Li. "Calibrating Transformers via Sparse Gaussian Processes." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/chen2023iclr-calibrating/)
BibTeX
@inproceedings{chen2023iclr-calibrating,
  title = {{Calibrating Transformers via Sparse Gaussian Processes}},
  author = {Chen, Wenlong and Li, Yingzhen},
  booktitle = {International Conference on Learning Representations},
  year = {2023},
  url = {https://mlanthology.org/iclr/2023/chen2023iclr-calibrating/}
}