Synthesizing Near-Boundary OOD Samples for Out-of-Distribution Detection

Abstract

Pre-trained vision-language models have exhibited remarkable abilities in detecting out-of-distribution (OOD) samples. However, some challenging OOD samples, which lie close to in-distribution (InD) data in image feature space, can still lead to misclassification. The emergence of foundation models like diffusion models and multimodal large language models (MLLMs) offers a potential solution to this issue. In this work, we propose SynOOD, a novel approach that harnesses foundation models to generate synthetic, challenging OOD data for fine-tuning CLIP models, thereby enhancing boundary-level discrimination between InD and OOD samples. Our method uses an iterative inpainting process guided by contextual prompts from MLLMs to produce nuanced, boundary-aligned OOD samples. These samples are refined through noise adjustments based on gradients from OOD scores like the energy score, effectively sampling from the InD/OOD boundary. With these carefully synthesized images, we fine-tune the CLIP image encoder and negative label features derived from the text encoder to strengthen connections between near-boundary OOD samples and a set of negative labels. Finally, SynOOD achieves state-of-the-art performance on the large-scale ImageNet benchmark, with minimal increases in parameters and runtime. Our approach significantly surpasses existing methods, improving AUROC by 2.80% and reducing FPR95 by 11.13%. Code is available at https://github.com/Jarvisgivemeasuit/SynOOD.
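The abstract mentions refining samples with gradients from OOD scores such as the energy score. As a point of reference only (not the paper's implementation), the standard energy score for a vector of classifier logits can be sketched as follows; the temperature parameter and toy logits here are illustrative assumptions.

```python
import numpy as np

def energy_score(logits, temperature=1.0):
    # Negative free energy: score(x) = T * logsumexp(logits / T).
    # Higher scores indicate more in-distribution-like inputs.
    z = logits / temperature
    m = np.max(z, axis=-1, keepdims=True)  # stabilize the logsumexp
    lse = m.squeeze(-1) + np.log(np.sum(np.exp(z - m), axis=-1))
    return temperature * lse

# Toy logits for two images over three labels:
# one confident (InD-like) prediction and one flat (OOD-like) prediction.
logits = np.array([[9.0, 1.0, 0.5],
                   [2.0, 2.1, 1.9]])
scores = energy_score(logits)
# The confident image receives the higher energy score.
assert scores[0] > scores[1]
```

In SynOOD's described pipeline, gradients of such a score with respect to the diffusion noise would steer synthesized samples toward the InD/OOD boundary.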

Cite

Text

Li et al. "Synthesizing Near-Boundary OOD Samples for Out-of-Distribution Detection." International Conference on Computer Vision, 2025.

Markdown

[Li et al. "Synthesizing Near-Boundary OOD Samples for Out-of-Distribution Detection." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/li2025iccv-synthesizing/)

BibTeX

@inproceedings{li2025iccv-synthesizing,
  title     = {{Synthesizing Near-Boundary OOD Samples for Out-of-Distribution Detection}},
  author    = {Li, Jinglun and Jiang, Kaixun and Chen, Zhaoyu and Lin, Bo and Tang, Yao and Ge, Weifeng and Zhang, Wenqiang},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {4496--4506},
  url       = {https://mlanthology.org/iccv/2025/li2025iccv-synthesizing/}
}