Segment Any Change

Abstract

Visual foundation models have achieved remarkable results in zero-shot image classification and segmentation, but zero-shot change detection remains an open problem. In this paper, we propose the segment any change models (AnyChange), a new type of change detection model that supports zero-shot prediction and generalization on unseen change types and data distributions. AnyChange is built on the segment anything model (SAM) via our training-free adaptation method, bitemporal latent matching. By revealing and exploiting intra-image and inter-image semantic similarities in SAM's latent space, bitemporal latent matching endows SAM with zero-shot change detection capabilities in a training-free way. We also propose a point query mechanism to enable AnyChange's zero-shot object-centric change detection capability. We perform extensive experiments to confirm the effectiveness of AnyChange for zero-shot change detection. AnyChange sets a new record on the SECOND benchmark for unsupervised change detection, exceeding the previous SOTA by up to 4.4% F$_1$ score, and achieving comparable accuracy with negligible manual annotations (1 pixel per image) for supervised change detection. Code is available at https://github.com/Z-Zheng/pytorch-change-models.
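
The sketch below illustrates the core idea stated in the abstract, matching per-object latents between the two dates, as a minimal PyTorch snippet. It assumes you already have SAM image-encoder feature maps for both images and binary object masks (e.g., from SAM's automatic mask generator) resized to the feature resolution; the function names, the specific scoring formula (1 minus cosine similarity), and the threshold are illustrative assumptions, not the authors' released API.

```python
# Minimal sketch of bitemporal latent matching on plain tensors.
# Assumption: feats_t1/feats_t2 are SAM image-encoder features; masks are SAM
# mask proposals downsampled to the feature grid. Names are hypothetical.
import torch
import torch.nn.functional as F


def mask_embedding(feats: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Average a (C, H, W) feature map over a binary (H, W) mask."""
    mask = mask.to(feats.dtype)
    denom = mask.sum().clamp(min=1.0)
    return (feats * mask.unsqueeze(0)).sum(dim=(1, 2)) / denom


def change_scores(feats_t1: torch.Tensor,
                  feats_t2: torch.Tensor,
                  masks: torch.Tensor) -> torch.Tensor:
    """Score each mask by how much its latent differs between the two dates.

    feats_t1, feats_t2: (C, H, W) encoder features for times t1 and t2.
    masks:              (N, H, W) binary object masks proposed on either image.
    Returns:            (N,) scores in [0, 2]; higher means more likely changed.
    """
    scores = []
    for m in masks:
        e1 = mask_embedding(feats_t1, m)
        e2 = mask_embedding(feats_t2, m)
        # Low bitemporal cosine similarity -> the region's semantics changed.
        scores.append(1.0 - F.cosine_similarity(e1, e2, dim=0))
    return torch.stack(scores)


if __name__ == "__main__":
    # Toy shapes only; real features would come from SAM's image encoder.
    C, H, W, N = 256, 64, 64, 5
    f1, f2 = torch.randn(C, H, W), torch.randn(C, H, W)
    masks = torch.rand(N, H, W) > 0.7
    changed = change_scores(f1, f2, masks) > 0.5  # threshold is a free parameter
    print(changed)
```

Because the scoring is training-free, the only design choices are the similarity measure and the decision threshold; the paper's point query mechanism additionally filters masks by similarity to a user-clicked object, which this sketch does not cover.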

Cite

Text

Zheng et al. "Segment Any Change." Neural Information Processing Systems, 2024. doi:10.52202/079017-2581

Markdown

[Zheng et al. "Segment Any Change." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/zheng2024neurips-segment/) doi:10.52202/079017-2581

BibTeX

@inproceedings{zheng2024neurips-segment,
  title     = {{Segment Any Change}},
  author    = {Zheng, Zhuo and Zhong, Yanfei and Zhang, Liangpei and Ermon, Stefano},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-2581},
  url       = {https://mlanthology.org/neurips/2024/zheng2024neurips-segment/}
}