SINDER: Repairing the Singular Defects of DINOv2
Abstract
Vision Transformer models trained on large-scale datasets, although effective, often exhibit artifacts in the patch tokens they extract. While such defects can be alleviated by re-training the entire model with additional classification tokens, the underlying reasons for the presence of these artifact tokens remain unclear. In this paper, we conduct a thorough investigation of this phenomenon, combining theoretical analysis with empirical observations. Our findings reveal that these artifacts originate from the pre-trained network itself, specifically stemming from the leading left singular vector of the network's weights. Furthermore, to mitigate these defects, we propose a novel fine-tuning smooth regularization that rectifies structural deficiencies using only a small dataset, thereby avoiding the need for complete re-training. We validate our method on various downstream tasks, including unsupervised segmentation, classification, supervised segmentation, and depth estimation, demonstrating its effectiveness in improving model performance. Code and checkpoints are available at https://github.com/haoqiwang/sinder.
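As a hedged illustration of the quantity the abstract refers to, the leading left singular vector of a weight matrix is the first column of U in its singular value decomposition W = U diag(S) Vᵀ. The sketch below extracts it with NumPy; the random matrix and its shape are placeholders standing in for a pre-trained layer's weights, not the paper's actual procedure:

```python
import numpy as np

# Hypothetical stand-in for a pre-trained layer's weight matrix.
rng = np.random.default_rng(0)
W = rng.standard_normal((768, 3072))

# SVD: W = U @ diag(S) @ Vt, with singular values in descending order.
U, S, Vt = np.linalg.svd(W, full_matrices=False)

# The leading left singular vector: the unit direction along which
# W has the largest gain (amplified most strongly by the layer).
u1 = U[:, 0]
print(u1.shape)  # (768,)
```

In the paper's analysis, it is this dominant direction of the weights, propagated through the network, that is linked to the artifact patch tokens.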
Cite
Text

Wang et al. "SINDER: Repairing the Singular Defects of DINOv2." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72667-5_2

Markdown

[Wang et al. "SINDER: Repairing the Singular Defects of DINOv2." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/wang2024eccv-sinder/) doi:10.1007/978-3-031-72667-5_2

BibTeX
@inproceedings{wang2024eccv-sinder,
title = {{SINDER: Repairing the Singular Defects of DINOv2}},
author = {Wang, Haoqi and Zhang, Tong and Salzmann, Mathieu},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2024},
doi = {10.1007/978-3-031-72667-5_2},
url = {https://mlanthology.org/eccv/2024/wang2024eccv-sinder/}
}