CoralSRT: Revisiting Coral Reef Semantic Segmentation by Feature Rectification via Self-Supervised Guidance

Abstract

We investigate coral reef semantic segmentation, where coral growth is governed by multifaceted factors such as genetics, environmental change, and internal interactions. Unlike segmenting structural units or instances, which are predictable and follow set patterns (often referred to as commonsense or priors), segmenting coral reefs requires modeling self-repeating, asymmetric, and amorphous distributions of elements: corals can grow in almost any shape and appearance. Revisiting existing segmentation approaches, we find that neither the computer vision nor the coral reef community has incorporated these intrinsic properties of corals into model design. In this work, we propose a simple formulation for coral reef semantic segmentation: we treat the segment as the basic unit for modeling both within-segment and cross-segment affinities. We propose CoralSRT, a feature rectification module driven by self-supervised guidance, to reduce the stochasticity of coral features extracted by powerful foundation models (FMs), as demonstrated in Fig. 1. We incorporate the intrinsic properties of corals to strengthen within-segment affinity by guiding the features within generated segments to align with their centrality. We find that features from FMs optimized by various pretext tasks on large-scale unlabeled or labeled data already contain rich information for modeling both within-segment and cross-segment affinities, enabling the adaptation of FMs to coral segmentation. CoralSRT rectifies FM features into features that are more effective for label propagation, yielding further significant gains in semantic segmentation performance, all without additional human supervision, without retraining or finetuning the FMs, and without domain-specific data. These advantages reduce the human effort and domain expertise needed for data collection and labeling. Our method is easy to implement, and task- and model-agnostic.
CoralSRT bridges self-supervised pre-training and supervised training in the feature space, and also offers insights for segmenting amorphous elements/stuff (e.g., grass, plants, cells, and biofouling).
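To make the core idea concrete, the sketch below illustrates one plausible reading of "guiding features within generated segments to align with their centrality": pulling each pixel's foundation-model feature toward the mean feature of its segment. This is an illustrative interpretation, not the authors' implementation; the function name, the `alpha` strength parameter, and the use of a simple per-segment mean as the centrality are all assumptions.

```python
import numpy as np

def rectify_features(features, segments, alpha=0.5):
    """Illustrative within-segment feature rectification (assumed form).

    features: (H, W, C) dense feature map from a foundation model.
    segments: (H, W) integer map of segment ids (e.g., from a segmenter).
    alpha:    hypothetical rectification strength in [0, 1];
              0 leaves features unchanged, 1 collapses each segment
              onto its mean feature.
    """
    out = features.copy()
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        # Segment "centrality" here is simply the mean feature of the segment.
        center = features[mask].mean(axis=0)
        # Interpolate each pixel's feature toward the segment centrality,
        # strengthening within-segment affinity.
        out[mask] = (1.0 - alpha) * features[mask] + alpha * center
    return out
```

With `alpha = 1.0` every segment becomes perfectly coherent (all features equal its mean), which makes within-segment affinity maximal at the cost of intra-segment detail; intermediate values trade off between the two.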

Cite

Text

Zheng et al. "CoralSRT: Revisiting Coral Reef Semantic Segmentation by Feature Rectification via Self-Supervised Guidance." International Conference on Computer Vision, 2025.

Markdown

[Zheng et al. "CoralSRT: Revisiting Coral Reef Semantic Segmentation by Feature Rectification via Self-Supervised Guidance." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/zheng2025iccv-coralsrt/)

BibTeX

@inproceedings{zheng2025iccv-coralsrt,
  title     = {{CoralSRT: Revisiting Coral Reef Semantic Segmentation by Feature Rectification via Self-Supervised Guidance}},
  author    = {Zheng, Ziqiang and Wong, Yuk-Kwan and Hua, Binh-Son and Shi, Jianbo and Yeung, Sai-Kit},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {19967--19977},
  url       = {https://mlanthology.org/iccv/2025/zheng2025iccv-coralsrt/}
}