Extending and Analyzing Self-Supervised Learning Across Domains
Abstract
Self-supervised representation learning has achieved impressive results in recent years, with experiments primarily conducted on ImageNet or other similarly large internet-imagery datasets. There has been little to no work applying these methods to other, smaller domains, such as satellite, textural, or biological imagery. We experiment with several popular methods on an unprecedented variety of domains. We discover, among other findings, that Rotation is the most semantically meaningful pretext task, while much of the performance of Jigsaw is attributable to the nature of its induced distribution rather than to semantic understanding. Additionally, there are several areas, such as fine-grained classification, where all tasks underperform. We quantitatively and qualitatively diagnose the reasons for these failures and successes via novel experiments studying pretext generalization, random labelings, and implicit dimensionality. Code and models are available at https://github.com/BramSW/Extending_SSRL_Across_Domains/.
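For concreteness, the Rotation pretext task mentioned in the abstract trains a network to predict which of four rotations (0°, 90°, 180°, 270°) was applied to an input image. Below is a minimal PyTorch sketch of this objective; the tiny backbone and random batch are illustrative stand-ins, not the released code from the repository above.

```python
import torch
import torch.nn as nn

def rotate_batch(images):
    """Rotate each image by 0/90/180/270 degrees; the label is the rotation index."""
    labels = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, k=int(k), dims=(1, 2))
                           for img, k in zip(images, labels)])
    return rotated, labels

# Any backbone ending in a 4-way classifier works; a tiny conv stub is used here.
backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 4),
)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 32, 32)            # stand-in batch of images
inputs, targets = rotate_batch(images)
loss = criterion(backbone(inputs), targets)   # self-supervised rotation loss
loss.backward()
```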
Cite
Text
Wallace and Hariharan. "Extending and Analyzing Self-Supervised Learning Across Domains." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58574-7_43
Markdown
[Wallace and Hariharan. "Extending and Analyzing Self-Supervised Learning Across Domains." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/wallace2020eccv-extending/) doi:10.1007/978-3-030-58574-7_43
BibTeX
@inproceedings{wallace2020eccv-extending,
title = {{Extending and Analyzing Self-Supervised Learning Across Domains}},
author = {Wallace, Bram and Hariharan, Bharath},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2020},
doi = {10.1007/978-3-030-58574-7_43},
url = {https://mlanthology.org/eccv/2020/wallace2020eccv-extending/}
}