On the Use of Anchoring for Training Vision Models
Abstract
Anchoring is a recent, architecture-agnostic principle for training deep neural networks that has been shown to significantly improve uncertainty estimation, calibration, and extrapolation capabilities. In this paper, we systematically explore anchoring as a general protocol for training vision models, providing fundamental insights into its training and inference processes and their implications for generalization and safety. Despite its promise, we identify a critical problem in anchored training that can lead to an increased risk of learning undesirable shortcuts, thereby limiting its generalization capabilities. To address this, we introduce a new anchored training protocol that employs a simple regularizer to mitigate this issue and significantly enhances generalization. We empirically evaluate our proposed approach across datasets and architectures of varying scales and complexities, demonstrating substantial performance gains in generalization and safety metrics compared to the standard training protocol. The open-source code is available at https://software.llnl.gov/anchoring.
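As an illustrative sketch only (not the paper's exact implementation), anchoring can be viewed as reparameterizing each input x into the pair (c, x − c) for a reference anchor c drawn from the training data, typically concatenated along the channel dimension so the network never sees x directly; the shapes and function names below are assumptions:

```python
import numpy as np

def anchored_input(x, anchor):
    """Form the anchored representation [c, x - c] by channel concatenation.

    This is a minimal sketch of the anchoring idea: the model consumes the
    anchor and the residual (x - anchor) instead of the raw input x.
    """
    return np.concatenate([anchor, x - anchor], axis=0)

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 32, 32))       # one CHW image (hypothetical shapes)
anchor = rng.normal(size=(3, 32, 32))  # anchor sampled from the training set

z = anchored_input(x, anchor)
print(z.shape)                          # channels are doubled: (6, 32, 32)
# The original input remains recoverable from the anchored pair:
print(np.allclose(z[:3] + z[3:], x))
```

At inference, predictions can be averaged over several randomly drawn anchors for the same x, which is what gives anchored models their uncertainty estimates.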
Cite
Text
Narayanaswamy et al. "On the Use of Anchoring for Training Vision Models." Neural Information Processing Systems, 2024. doi:10.52202/079017-3024
Markdown
[Narayanaswamy et al. "On the Use of Anchoring for Training Vision Models." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/narayanaswamy2024neurips-use/) doi:10.52202/079017-3024
BibTeX
@inproceedings{narayanaswamy2024neurips-use,
title = {{On the Use of Anchoring for Training Vision Models}},
author = {Narayanaswamy, Vivek and Thopalli, Kowshik and Anirudh, Rushil and Mubarka, Yamen and Sakla, Wesam and Thiagarajan, Jayaraman J.},
booktitle = {Neural Information Processing Systems},
year = {2024},
doi = {10.52202/079017-3024},
url = {https://mlanthology.org/neurips/2024/narayanaswamy2024neurips-use/}
}