Discovering Global False Negatives on the Fly for Self-Supervised Contrastive Learning

Abstract

In self-supervised contrastive learning, negative pairs are typically constructed using an anchor image and a sample drawn from the entire dataset, excluding the anchor. However, this approach can create negative pairs with similar semantics, referred to as "false negatives", whose embeddings are then falsely pushed apart. To address this issue, we introduce GloFND, an optimization-based approach that automatically learns, on the fly, a threshold for each anchor to identify its false negatives during training. In contrast to previous methods for false negative discovery, our approach detects false negatives globally across the entire dataset rather than locally within the mini-batch. Moreover, its per-iteration computation cost remains independent of the dataset size. Experimental results on image and image-text data demonstrate the effectiveness of the proposed method. Our implementation is available at https://github.com/vibalcam/GloFND.
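For intuition, the per-anchor thresholding described in the abstract can be sketched as follows: each anchor keeps a scalar threshold that is nudged using only in-batch similarities, so that on average a target fraction of the dataset lies above it, and candidate negatives exceeding the threshold are treated as false negatives and excluded from the contrastive loss. The sketch below (PyTorch) is an assumption-laden illustration of that idea, not the authors' implementation; see the linked repository for the actual code. The class name, hyperparameters (alpha, lr), and the particular quantile-style update rule are hypothetical.

import torch

class PerAnchorThreshold:
    """Toy sketch: one threshold per anchor, updated from in-batch
    similarities only, so per-iteration cost does not grow with the dataset."""

    def __init__(self, num_samples: int, alpha: float = 0.01, lr: float = 0.05):
        self.lmbda = torch.zeros(num_samples)  # threshold lambda_i for each anchor i
        self.alpha = alpha                     # target fraction flagged as false negatives
        self.lr = lr                           # step size for the threshold update

    def update_and_mask(self, anchor_ids: torch.Tensor, sims: torch.Tensor) -> torch.Tensor:
        # sims: (B, M) similarities between B anchors and M candidate negatives
        lam = self.lmbda[anchor_ids].to(sims.device)                 # (B,)
        frac_above = (sims > lam.unsqueeze(1)).float().mean(dim=1)   # fraction exceeding lambda_i
        # stochastic quantile-style update: drive P(sim > lambda_i) toward alpha
        lam = lam + self.lr * (frac_above - self.alpha)
        self.lmbda[anchor_ids] = lam.cpu()
        # keep-mask: candidates above the per-anchor threshold are treated as false negatives
        return sims <= lam.unsqueeze(1)

In a contrastive objective, the returned keep-mask would be applied to the negative similarities before the log-sum-exp, so that flagged false negatives are not pushed away from the anchor.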

Cite

Text

Balmaseda et al. "Discovering Global False Negatives on the Fly for Self-Supervised Contrastive Learning." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Balmaseda et al. "Discovering Global False Negatives on the Fly for Self-Supervised Contrastive Learning." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/balmaseda2025icml-discovering/)

BibTeX

@inproceedings{balmaseda2025icml-discovering,
  title     = {{Discovering Global False Negatives on the Fly for Self-Supervised Contrastive Learning}},
  author    = {Balmaseda, Vicente and Wang, Bokun and Lin, Ching-Long and Yang, Tianbao},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {2697--2714},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/balmaseda2025icml-discovering/}
}