A Unified Interpretation of Training-Time Out-of-Distribution Detection

Abstract

This paper explains training-time out-of-distribution (OOD) detection from a novel perspective, namely interactions between different input variables of deep neural networks (DNNs). Specifically, we provide a unified understanding of why current training-time OOD detection methods are effective: DNNs trained with these methods all encode more complex interactions for inference than DNNs trained without them, which contributes to their superior OOD detection performance. We further conduct thorough empirical analyses and verify that complex interactions play a primary role in OOD detection, by developing a simple yet efficient method that forces the DNN to learn interactions of specific complexities and by evaluating the resulting change in OOD detection performance. Besides, we also use interactions to investigate why near-OOD samples are harder to distinguish from in-distribution (ID) samples than far-OOD samples: the distribution of interactions in near-OOD samples is more similar to that of ID samples than the distribution in far-OOD samples is. Moreover, we discover that training-time OOD detection methods effectively decrease this similarity.
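The "interactions" in the abstract refer to game-theoretic interactions between input variables, commonly formalized as Harsanyi dividends: for a subset S of variables, I(S) sums the model's masked outputs v(T) over all T ⊆ S with alternating signs, and |S| is the interaction's complexity (order). As a minimal sketch of this standard definition (the toy model and function names below are illustrative assumptions, not the paper's implementation):

```python
from itertools import combinations

def harsanyi_interaction(v, S):
    """Harsanyi dividend I(S) = sum over T ⊆ S of (-1)^(|S|-|T|) * v(T).

    v: callable mapping a frozenset of "present" variables to a scalar,
       e.g., a DNN's output on an input with the remaining variables masked.
    S: iterable of variable indices; |S| is the interaction's order (complexity).
    """
    S = list(S)
    total = 0.0
    for r in range(len(S) + 1):
        for T in combinations(S, r):
            total += (-1) ** (len(S) - len(T)) * v(frozenset(T))
    return total

# Hypothetical toy "model": output x0 + x1 + 2*x0*x1 over presence indicators.
def toy_model(present):
    x0 = 1.0 if 0 in present else 0.0
    x1 = 1.0 if 1 in present else 0.0
    return x0 + x1 + 2.0 * x0 * x1

print(harsanyi_interaction(toy_model, {0}))     # order-1 effect of x0 → 1.0
print(harsanyi_interaction(toy_model, {0, 1}))  # order-2 interaction → 2.0
```

High-order (large |S|) interactions like the order-2 term above are what the paper calls "complex" interactions; comparing their distribution across ID, near-OOD, and far-OOD samples is the kind of analysis the abstract describes.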

Cite

Text

Cheng et al. "A Unified Interpretation of Training-Time Out-of-Distribution Detection." International Conference on Computer Vision, 2025.

Markdown

[Cheng et al. "A Unified Interpretation of Training-Time Out-of-Distribution Detection." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/cheng2025iccv-unified/)

BibTeX

@inproceedings{cheng2025iccv-unified,
  title     = {{A Unified Interpretation of Training-Time Out-of-Distribution Detection}},
  author    = {Cheng, Xu and Jiang, Xin and Li, Zechao},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {2142--2151},
  url       = {https://mlanthology.org/iccv/2025/cheng2025iccv-unified/}
}