Tent: Fully Test-Time Adaptation by Entropy Minimization

Abstract

A model must adapt itself to generalize to new and different data during testing. In this setting of fully test-time adaptation the model has only the test data and its own parameters. We propose to adapt by test entropy minimization (tent): we optimize the model for confidence as measured by the entropy of its predictions. Our method estimates normalization statistics and optimizes channel-wise affine transformations to update online on each batch. Tent reduces generalization error for image classification on corrupted ImageNet and CIFAR-10/100 and reaches a new state-of-the-art error on ImageNet-C. Tent handles source-free domain adaptation on digit recognition from SVHN to MNIST/MNIST-M/USPS, on semantic segmentation from GTA to Cityscapes, and on the VisDA-C benchmark. These results are achieved in one epoch of test-time optimization without altering training.
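The adaptation described above (minimize the entropy of the model's own predictions, estimating normalization statistics from the test batch and updating only channel-wise scale and shift parameters while the rest of the network stays frozen) can be sketched in NumPy. This is an illustrative toy, not the authors' implementation: the feature batch, classifier weights, and the finite-difference gradients (standing in for autograd) are all assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def entropy(probs):
    # tent objective: mean Shannon entropy of the batch predictions
    return float(-(probs * np.log(probs + 1e-12)).sum(axis=1).mean())

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 8))   # one test batch of features (hypothetical)
W = rng.normal(size=(8, 4))    # frozen classifier head, never updated

# estimate normalization statistics from the test batch itself
mu, sigma = x.mean(axis=0), x.std(axis=0) + 1e-5
x_hat = (x - mu) / sigma

gamma = np.ones(8)             # channel-wise scale -- the only
beta = np.zeros(8)             # channel-wise shift -- adapted parameters

def loss(gamma, beta):
    return entropy(softmax((gamma * x_hat + beta) @ W))

before = loss(gamma, beta)

# one online update; finite-difference gradients stand in for autograd
lr, eps = 0.01, 1e-5
grads = []
for p in (gamma, beta):
    g = np.zeros_like(p)
    for i in range(p.size):
        p[i] += eps
        hi = loss(gamma, beta)
        p[i] -= 2 * eps
        lo = loss(gamma, beta)
        p[i] += eps
        g[i] = (hi - lo) / (2 * eps)
    grads.append(g)
gamma -= lr * grads[0]
beta -= lr * grads[1]

after = loss(gamma, beta)
print(f"prediction entropy before: {before:.4f}  after: {after:.4f}")
```

Each incoming batch would repeat this step online, so the model grows more confident (lower-entropy) on the shifted test distribution without ever seeing source data or labels.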

Cite

Text

Wang et al. "Tent: Fully Test-Time Adaptation by Entropy Minimization." International Conference on Learning Representations, 2021.

Markdown

[Wang et al. "Tent: Fully Test-Time Adaptation by Entropy Minimization." International Conference on Learning Representations, 2021.](https://mlanthology.org/iclr/2021/wang2021iclr-tent/)

BibTeX

@inproceedings{wang2021iclr-tent,
  title     = {{Tent: Fully Test-Time Adaptation by Entropy Minimization}},
  author    = {Wang, Dequan and Shelhamer, Evan and Liu, Shaoteng and Olshausen, Bruno and Darrell, Trevor},
  booktitle = {International Conference on Learning Representations},
  year      = {2021},
  url       = {https://mlanthology.org/iclr/2021/wang2021iclr-tent/}
}