RETSim: Resilient and Efficient Text Similarity
Abstract
This paper introduces RETSim (Resilient and Efficient Text Similarity), a lightweight, multilingual deep learning model trained to produce robust metric embeddings for near-duplicate text retrieval, clustering, and dataset deduplication tasks. We demonstrate that RETSim is significantly more robust and accurate than MinHash and neural text embeddings, achieving new state-of-the-art performance on dataset deduplication, adversarial text retrieval benchmarks, and spam clustering tasks. Additionally, we introduce the W4NT3D benchmark (Wiki-40B 4dversarial Near-T3xt Dataset), enabling the evaluation of models on typo-laden near-duplicate text retrieval in a multilingual setting. RETSim and the W4NT3D benchmark are released under the MIT License at https://github.com/google/unisim.
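The abstract's core use case, near-duplicate dataset deduplication via metric embeddings, can be illustrated with a minimal sketch. The `embed` function below is a hypothetical stand-in (a unit-norm bag of character trigrams), not the actual RETSim model or the `unisim` library API; only the greedy cosine-threshold deduplication loop reflects the general technique described in the abstract.

```python
import numpy as np

def char_ngrams(text, n=3):
    # All overlapping character n-grams of a string.
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def embed(texts, n=3):
    """Hypothetical stand-in for a RETSim-style encoder: unit-norm
    bag-of-character-trigram vectors (illustration only; the real
    model is a trained multilingual neural network)."""
    vocab = {}
    for t in texts:
        for g in char_ngrams(t, n):
            vocab.setdefault(g, len(vocab))
    vecs = np.zeros((len(texts), len(vocab)))
    for i, t in enumerate(texts):
        for g in char_ngrams(t, n):
            vecs[i, vocab[g]] += 1.0
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def dedup(texts, threshold=0.7):
    """Greedy near-duplicate removal: keep a text only if its cosine
    similarity to every previously kept text is below the threshold."""
    vecs = embed(texts)
    kept, kept_vecs = [], []
    for t, v in zip(texts, vecs):
        if all(float(v @ kv) < threshold for kv in kept_vecs):
            kept.append(t)
            kept_vecs.append(v)
    return kept

docs = [
    "the quick brown fox",
    "the quikc brown fox",   # typo-laden near-duplicate
    "an unrelated sentence",
]
print(dedup(docs))  # -> ['the quick brown fox', 'an unrelated sentence']
```

Because trigram bags are robust to single transpositions, the typo-laden copy still scores above the threshold against the original and is dropped, while the unrelated sentence shares no trigrams and is kept; the released model replaces the toy encoder with embeddings that remain robust under heavier adversarial edits.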
Cite
Text
Zhang et al. "RETSim: Resilient and Efficient Text Similarity." International Conference on Learning Representations, 2024.
Markdown
[Zhang et al. "RETSim: Resilient and Efficient Text Similarity." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/zhang2024iclr-retsim/)
BibTeX
@inproceedings{zhang2024iclr-retsim,
  title     = {{RETSim: Resilient and Efficient Text Similarity}},
  author    = {Zhang, Marina and Vallis, Owen Skipper and Bumin, Aysegul and Vakharia, Tanay and Bursztein, Elie},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/zhang2024iclr-retsim/}
}