FOCUS: Towards Universal Foreground Segmentation

Abstract

Foreground segmentation is a fundamental task in computer vision, encompassing various subdivision tasks. Previous research has typically designed a task-specific architecture for each of these tasks, leading to a lack of unification. Moreover, these approaches primarily focus on recognizing foreground objects without effectively distinguishing them from the background. In this paper, we emphasize the importance of the background and its relationship with the foreground. We introduce FOCUS, the Foreground ObjeCts Universal Segmentation framework, which can handle multiple foreground tasks. We develop a multi-scale semantic network that uses the edge information of objects to enhance image features. To achieve boundary-aware segmentation, we propose a novel distillation method that integrates a contrastive learning strategy to refine the prediction mask in a multi-modal feature space. We conduct extensive experiments on a total of 13 datasets across 5 tasks, and the results demonstrate that FOCUS consistently outperforms state-of-the-art task-specific models on most metrics.

Cite

Text

You et al. "FOCUS: Towards Universal Foreground Segmentation." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I9.33038

Markdown

[You et al. "FOCUS: Towards Universal Foreground Segmentation." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/you2025aaai-focus/) doi:10.1609/AAAI.V39I9.33038

BibTeX

@inproceedings{you2025aaai-focus,
  title     = {{FOCUS: Towards Universal Foreground Segmentation}},
  author    = {You, Zuyao and Kong, Lingyu and Meng, Lingchen and Wu, Zuxuan},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {9580--9588},
  doi       = {10.1609/AAAI.V39I9.33038},
  url       = {https://mlanthology.org/aaai/2025/you2025aaai-focus/}
}