MT3: Meta Test-Time Training for Self-Supervised Test-Time Adaption

Abstract

An unresolved problem in deep learning is that neural networks struggle to cope with domain shifts at test time, since network parameters are commonly fixed after training. Our proposed method, Meta Test-Time Training (MT3), breaks this paradigm and enables adaptation at test time. We combine meta-learning, self-supervision, and test-time training to learn to adapt to unseen test distributions. By minimizing the self-supervised loss, we learn task-specific model parameters for different tasks. A meta-model is optimized such that its adaptation to the different task-specific models leads to higher performance on those tasks. At test time, a single unlabeled image is sufficient to adapt the meta-model parameters: minimizing only the self-supervised loss component yields a better prediction for that image. Our approach significantly improves on state-of-the-art results on the CIFAR-10-Corrupted image classification benchmark.

Cite

Text

Bartler et al. "MT3: Meta Test-Time Training for Self-Supervised Test-Time Adaption." Artificial Intelligence and Statistics, 2022.

Markdown

[Bartler et al. "MT3: Meta Test-Time Training for Self-Supervised Test-Time Adaption." Artificial Intelligence and Statistics, 2022.](https://mlanthology.org/aistats/2022/bartler2022aistats-mt3/)

BibTeX

@inproceedings{bartler2022aistats-mt3,
  title     = {{MT3: Meta Test-Time Training for Self-Supervised Test-Time Adaption}},
  author    = {Bartler, Alexander and Bühler, Andre and Wiewel, Felix and Döbler, Mario and Yang, Bin},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2022},
  pages     = {3080--3090},
  volume    = {151},
  url       = {https://mlanthology.org/aistats/2022/bartler2022aistats-mt3/}
}