An Evaluation of Self-Supervised Pre-Training for Skin-Lesion Analysis
Abstract
Self-supervised pre-training appears as an advantageous alternative to supervised pre-training for transfer learning. By synthesizing annotations through pretext tasks, self-supervision allows models to be pre-trained on large amounts of pseudo-labeled data before fine-tuning them on the target task. In this work, we assess self-supervision for the diagnosis of skin lesions, comparing three self-supervised pipelines to a challenging supervised baseline, on five test datasets comprising in- and out-of-distribution samples. Our results show that self-supervision is competitive both in improving accuracies and in reducing the variability of outcomes. Self-supervision proves particularly useful in low-data scenarios ($<1\,500$ and $<150$ training samples), where its ability to stabilize the outcomes is essential to providing sound results.
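The transfer-learning recipe the abstract describes (pre-train an encoder with self-supervision on a pretext task, then fine-tune it on the labeled target data) can be summarized in a short sketch. The snippet below is a minimal illustration, not the authors' pipeline: the checkpoint path, the ResNet-50 backbone, and the two-class head are all assumptions made for the example.

```python
# Minimal sketch of fine-tuning a self-supervised pre-trained backbone on a
# labeled skin-lesion dataset. Checkpoint path, backbone, and class count are
# hypothetical placeholders, not the paper's actual configuration.
from pathlib import Path

import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # e.g., melanoma vs. benign; assumption for illustration

# Backbone initialized from a self-supervised checkpoint (path is hypothetical).
backbone = models.resnet50(weights=None)
ckpt = Path("ssl_pretrained_resnet50.pth")
if ckpt.exists():
    state = torch.load(ckpt, map_location="cpu")
    # strict=False: pretext-task heads in the checkpoint need not match.
    backbone.load_state_dict(state, strict=False)

# Replace the projection/classification head for the target task.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """Run one fine-tuning step on a batch from the target dataset."""
    backbone.train()
    optimizer.zero_grad()
    loss = criterion(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the low-data regimes the paper highlights, it is also common to freeze early layers or use a smaller learning rate for the backbone than for the new head; the sketch keeps everything trainable for simplicity.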
Cite
Text
Chaves et al. "An Evaluation of Self-Supervised Pre-Training for Skin-Lesion Analysis." European Conference on Computer Vision Workshops, 2022. doi:10.1007/978-3-031-25069-9_11
Markdown
[Chaves et al. "An Evaluation of Self-Supervised Pre-Training for Skin-Lesion Analysis." European Conference on Computer Vision Workshops, 2022.](https://mlanthology.org/eccvw/2022/chaves2022eccvw-evaluation/) doi:10.1007/978-3-031-25069-9_11
BibTeX
@inproceedings{chaves2022eccvw-evaluation,
title = {{An Evaluation of Self-Supervised Pre-Training for Skin-Lesion Analysis}},
author = {Chaves, Levy G. and Bissoto, Alceu and Valle, Eduardo and Avila, Sandra},
booktitle = {European Conference on Computer Vision Workshops},
year = {2022},
pages = {150--166},
doi = {10.1007/978-3-031-25069-9_11},
url = {https://mlanthology.org/eccvw/2022/chaves2022eccvw-evaluation/}
}