SANTA: Source Anchoring Network and Target Alignment for Continual Test Time Adaptation
Abstract
Adapting a trained model to perform satisfactorily on continually changing test environments is an important and challenging task. In this work, we propose a novel framework, SANTA, which aims to satisfy the following requirements for online adaptation: 1) it should work effectively for different (even small) batch sizes; 2) it should continue to perform well on the source domain; 3) it should have minimal tunable hyperparameters and storage requirements. Given a network pre-trained on source domain data, the proposed framework modifies the affine parameters of its batch normalization layers using source-anchoring-based self-distillation. This ensures that the model incorporates knowledge from newly encountered domains without catastrophically forgetting the previously seen ones. We also propose a source-prototype-driven contrastive alignment to ensure natural grouping of the target samples while maintaining the already learnt semantic information. Extensive evaluation on three benchmark datasets under challenging settings justifies the effectiveness of SANTA for real-world applications. Code: https://github.com/goirik-chakrabarty/SANTA
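The core idea in the abstract, keeping a frozen copy of the source network as an anchor and updating only the batch-normalization affine parameters of an adapting copy via self-distillation on incoming target batches, can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the tiny classifier, the layer sizes, the learning rate, and the use of a plain KL-divergence distillation loss are all illustrative assumptions; see the linked repository for the actual method (including the contrastive alignment term, which is omitted here).

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical tiny classifier standing in for the source-trained network.
source = nn.Sequential(
    nn.Linear(8, 16), nn.BatchNorm1d(16), nn.ReLU(), nn.Linear(16, 4)
)
source.eval()  # frozen source anchor: never updated, uses running BN stats

student = copy.deepcopy(source)
student.train()  # BN normalizes with current (target) batch statistics

# Adapt only the BN affine parameters (gamma/beta); freeze everything else.
for p in student.parameters():
    p.requires_grad_(False)
bn_params = []
for m in student.modules():
    if isinstance(m, nn.BatchNorm1d):
        m.weight.requires_grad_(True)
        m.bias.requires_grad_(True)
        bn_params += [m.weight, m.bias]

opt = torch.optim.SGD(bn_params, lr=1e-2)

# A synthetic "target domain" batch with a distribution shift.
x_target = torch.randn(32, 8) + 1.5

for _ in range(5):
    with torch.no_grad():
        anchor_logits = source(x_target)  # source-anchored teacher predictions
    logits = student(x_target)
    # Self-distillation: pull adapted predictions toward the frozen source's.
    loss = F.kl_div(
        logits.log_softmax(-1), anchor_logits.softmax(-1), reduction="batchmean"
    )
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because only gamma and beta are trainable, the adapted model stays close to the source network by construction, which is what lets this style of adaptation avoid catastrophic forgetting of previously seen domains.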
Cite
Text
Chakrabarty et al. "SANTA: Source Anchoring Network and Target Alignment for Continual Test Time Adaptation." Transactions on Machine Learning Research, 2023.

Markdown
[Chakrabarty et al. "SANTA: Source Anchoring Network and Target Alignment for Continual Test Time Adaptation." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/chakrabarty2023tmlr-santa/)

BibTeX
@article{chakrabarty2023tmlr-santa,
  title   = {{SANTA: Source Anchoring Network and Target Alignment for Continual Test Time Adaptation}},
  author  = {Chakrabarty, Goirik and Sreenivas, Manogna and Biswas, Soma},
  journal = {Transactions on Machine Learning Research},
  year    = {2023},
  url     = {https://mlanthology.org/tmlr/2023/chakrabarty2023tmlr-santa/}
}