Diagnosing Pretrained Models for Out-of-Distribution Detection
Abstract
This work questions a common assumption in OOD detection: that models with higher in-distribution (ID) accuracy tend to have better OOD performance. Recent findings show this assumption does not always hold. A direct observation is that later versions of torchvision models improve ID accuracy but suffer a significant drop in OOD performance. We systematically diagnose torchvision training recipes and explain this effect by analyzing the maximal logits of ID and OOD samples. We then propose post-hoc and training-time solutions that mitigate the OOD performance drop by fixing problematic augmentations in the torchvision recipes. Both solutions enhance OOD detection while maintaining strong ID performance.
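The abstract's analysis centers on maximal logits of ID and OOD samples. As context, a minimal sketch of max-logit OOD scoring (a standard baseline; the function name and toy logits below are illustrative, not from the paper) looks like this: ID inputs tend to produce one confidently large logit, while OOD inputs produce flatter logit vectors, so the per-sample maximum logit can serve as an OOD score.

```python
import numpy as np

def max_logit_score(logits: np.ndarray) -> np.ndarray:
    """OOD score per sample: the maximal logit.

    Higher scores suggest in-distribution; lower scores suggest OOD.
    `logits` has shape (num_samples, num_classes).
    """
    return logits.max(axis=1)

# Toy logits (illustrative): ID samples have a confident top logit,
# OOD samples have flatter logit vectors.
id_logits = np.array([[8.0, 1.0, 0.5],
                      [7.5, 0.2, 0.1]])
ood_logits = np.array([[2.0, 1.8, 1.9],
                       [1.5, 1.4, 1.6]])

print(max_logit_score(id_logits))   # comparatively large scores
print(max_logit_score(ood_logits))  # comparatively small scores
```

A threshold on this score then separates ID from OOD; the paper's diagnosis concerns how training recipes shift these ID and OOD score distributions relative to each other.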
Cite
Text
Xiong et al. "Diagnosing Pretrained Models for Out-of-Distribution Detection." International Conference on Computer Vision, 2025.
Markdown
[Xiong et al. "Diagnosing Pretrained Models for Out-of-Distribution Detection." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/xiong2025iccv-diagnosing/)
BibTeX
@inproceedings{xiong2025iccv-diagnosing,
  title     = {{Diagnosing Pretrained Models for Out-of-Distribution Detection}},
  author    = {Xiong, Haipeng and Xu, Kai and Yao, Angela},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {1836--1845},
  url       = {https://mlanthology.org/iccv/2025/xiong2025iccv-diagnosing/}
}