Single Image Test-Time Adaptation for Segmentation
Abstract
Test-time adaptation (TTA) methods improve the robustness of deep neural networks to domain shift. We explore the adaptation of segmentation models to a single unlabelled image, with no other data available at test time. This setting allows per-sample performance analysis while excluding orthogonal factors such as weight-restart strategies. We propose two new segmentation TTA methods and compare them to established baselines and recent state-of-the-art approaches. The methods are first validated on synthetic domain shifts and then tested on real-world datasets. The analysis highlights that simple modifications, such as the choice of loss function, can greatly improve the performance of standard baselines, and that different methods and hyper-parameters are optimal for different kinds of domain shift. This hinders the development of fully general methods applicable when no prior knowledge about the domain shift is assumed.
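As a minimal, hedged sketch of the single-image setting described above: a common TTA baseline minimizes the prediction entropy of one image's per-pixel outputs with respect to a small set of parameters. The example below uses a hypothetical per-class logit bias as the adapted parameter (standing in for the normalization parameters typically tuned in practice) and plain NumPy gradient descent; it is an illustration of the setting, not the paper's methods.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mean_entropy(logits, bias):
    """Mean per-pixel prediction entropy of logits shifted by a class bias."""
    p = softmax(logits + bias)
    return float(-(p * np.log(p + 1e-12)).sum(axis=-1).mean())

def tta_step(logits, bias, lr=0.5):
    """One entropy-minimization step on a single image's logits.

    Only a shared per-class bias is adapted (a hypothetical stand-in for
    the lightweight parameters, e.g. normalization affines, adapted in TTA).
    Uses the closed-form gradient dH/dz_k = -p_k (log p_k + H).
    """
    p = softmax(logits + bias)                            # (H, W, C)
    h = -(p * np.log(p + 1e-12)).sum(-1, keepdims=True)   # per-pixel entropy
    grad_logits = -p * (np.log(p + 1e-12) + h)            # dH/dz
    grad_bias = grad_logits.mean(axis=(0, 1))             # shared over pixels
    return bias - lr * grad_bias

# Toy "single image": 8x8 logits over 3 classes from a fixed seed.
rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 8, 3))
bias = np.zeros(3)
before = mean_entropy(logits, bias)
for _ in range(20):
    bias = tta_step(logits, bias)
after = mean_entropy(logits, bias)
```

After the adaptation loop, the mean prediction entropy on this single image decreases; in real TTA pipelines the same loop would update model parameters rather than a standalone bias.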
Cite
Text
Janouskova et al. "Single Image Test-Time Adaptation for Segmentation." Transactions on Machine Learning Research, 2024.
Markdown
[Janouskova et al. "Single Image Test-Time Adaptation for Segmentation." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/janouskova2024tmlr-single/)
BibTeX
@article{janouskova2024tmlr-single,
title = {{Single Image Test-Time Adaptation for Segmentation}},
author = {Janouskova, Klara and Shor, Tamir and Baskin, Chaim and Matas, Jiri},
journal = {Transactions on Machine Learning Research},
year = {2024},
url = {https://mlanthology.org/tmlr/2024/janouskova2024tmlr-single/}
}