Neural Clamping: Joint Input Perturbation and Temperature Scaling for Neural Network Calibration

Abstract

Neural network calibration is an essential task in deep learning: it ensures consistency between a model's prediction confidence and its true correctness likelihood. In this paper, we propose a new post-processing calibration method called $\textbf{Neural Clamping}$, which applies a simple joint input-output transformation to a pre-trained classifier via a learnable universal input perturbation and an output temperature scaling parameter. Moreover, we provide a theoretical explanation of why Neural Clamping is provably better than temperature scaling. Evaluated on the BloodMNIST, CIFAR-100, and ImageNet image recognition datasets with a variety of deep neural network models, our empirical results show that Neural Clamping significantly outperforms state-of-the-art post-processing calibration methods. The code is available at github.com/yungchentang/NCToolkit, and a demo is available at huggingface.co/spaces/TrustSafeAI/NCTV.
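To make the joint input-output transformation concrete, below is a minimal PyTorch sketch of the idea described in the abstract: a universal (input-agnostic) additive perturbation and a scalar output temperature, both fit on a held-out calibration set while the classifier stays frozen. The class and function names, the `calib_loader` input, and the negative log-likelihood objective are illustrative assumptions for this sketch, not the NCToolkit API or the paper's exact training recipe.

```python
import torch
import torch.nn as nn


class NeuralClampingSketch(nn.Module):
    """Wraps a frozen pre-trained classifier with a learnable universal
    input perturbation (delta) and an output temperature (T)."""

    def __init__(self, model, input_shape, init_temp=1.0):
        super().__init__()
        self.model = model.eval()  # pre-trained classifier, kept frozen
        for p in self.model.parameters():
            p.requires_grad_(False)
        # Universal perturbation, broadcast-added to every input.
        self.delta = nn.Parameter(torch.zeros(1, *input_shape))
        # Scalar temperature applied to the output logits.
        self.temperature = nn.Parameter(torch.tensor(float(init_temp)))

    def forward(self, x):
        logits = self.model(x + self.delta)
        return logits / self.temperature


def calibrate(wrapper, calib_loader, epochs=10, lr=0.01):
    """Fit delta and T by minimizing a calibration objective (here, NLL
    via cross-entropy) on a held-out calibration set."""
    opt = torch.optim.Adam([wrapper.delta, wrapper.temperature], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in calib_loader:
            opt.zero_grad()
            loss_fn(wrapper(x), y).backward()
            opt.step()
    return wrapper
```

Setting `delta` to zero and optimizing only `temperature` recovers standard temperature scaling, which is why the joint parameterization can only match or improve on it under the same objective.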

Cite

Text

Tang et al. "Neural Clamping: Joint Input Perturbation and Temperature Scaling for Neural Network Calibration." Transactions on Machine Learning Research, 2024.

Markdown

[Tang et al. "Neural Clamping: Joint Input Perturbation and Temperature Scaling for Neural Network Calibration." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/tang2024tmlr-neural/)

BibTeX

@article{tang2024tmlr-neural,
  title     = {{Neural Clamping: Joint Input Perturbation and Temperature Scaling for Neural Network Calibration}},
  author    = {Tang, Yung-Chen and Chen, Pin-Yu and Ho, Tsung-Yi},
  journal   = {Transactions on Machine Learning Research},
  year      = {2024},
  url       = {https://mlanthology.org/tmlr/2024/tang2024tmlr-neural/}
}