Feature Clipping for Uncertainty Calibration
Abstract
Deep neural networks (DNNs) have achieved significant success across various tasks, but ensuring reliable uncertainty estimates, known as model calibration, is crucial for their safe and effective deployment. Modern DNNs often suffer from overconfidence, leading to miscalibration. We propose a novel post-hoc calibration method called feature clipping (FC) to address this issue. FC clips feature values at a specified threshold, increasing the entropy of predictions on samples with high calibration error while preserving the information in samples with low calibration error. This reduces overconfidence in predictions and improves the overall calibration of the model. Extensive experiments on datasets such as CIFAR-10, CIFAR-100, and ImageNet, and on models including CNNs and transformers, demonstrate that FC consistently enhances calibration performance. Additionally, we provide a theoretical analysis that validates the effectiveness of our method. As the first calibration technique based on feature modification, feature clipping offers a novel approach to improving model calibration, showing significant improvements over both post-hoc and train-time calibration methods and pioneering a new avenue for feature-based model calibration.
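The core operation described in the abstract is simple enough to sketch. The NumPy snippet below is a minimal illustration, not the authors' implementation: the threshold `c=1.0`, the toy feature and classifier sizes, and the choice to clip only from above (natural for nonnegative post-ReLU features) are all assumptions made here for demonstration.

```python
import numpy as np

def feature_clip(features, c):
    # Clip feature values from above at threshold c. Post-ReLU features are
    # nonnegative, so an upper threshold is the case sketched here; whether
    # the paper also clips from below is an assumption left open.
    return np.minimum(features, c)

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p):
    # Shannon entropy of each predictive distribution (rows of p).
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

rng = np.random.default_rng(0)

# Toy stand-ins: penultimate-layer activations (ReLU-like, some large values)
# and a fixed linear classifier head. All sizes and the threshold are
# illustrative choices, not values from the paper.
features = np.maximum(rng.normal(size=(4, 16)) * 5.0, 0.0)
W = rng.normal(size=(16, 10))
b = np.zeros(10)

probs_raw = softmax(features @ W + b)                      # unmodified head
probs_fc = softmax(feature_clip(features, c=1.0) @ W + b)  # clipped features

# Clipping shrinks the largest activations, which shrinks logit magnitudes
# and raises the entropy of the softmax output (less overconfident).
print("mean entropy before clipping:", entropy(probs_raw).mean())
print("mean entropy after clipping: ", entropy(probs_fc).mean())
```

In a post-hoc setting the threshold would presumably be tuned on a held-out validation set against a calibration metric such as expected calibration error, analogous to how temperature scaling tunes its single temperature parameter.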
Cite
Text

Tao et al. "Feature Clipping for Uncertainty Calibration." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I19.34297

Markdown

[Tao et al. "Feature Clipping for Uncertainty Calibration." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/tao2025aaai-feature/) doi:10.1609/AAAI.V39I19.34297

BibTeX
@inproceedings{tao2025aaai-feature,
title = {{Feature Clipping for Uncertainty Calibration}},
author = {Tao, Linwei and Dong, Minjing and Xu, Chang},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
  pages = {20841--20849},
doi = {10.1609/AAAI.V39I19.34297},
url = {https://mlanthology.org/aaai/2025/tao2025aaai-feature/}
}