Test-Time Training with Masked Autoencoders

Abstract

Test-time training adapts to a new test distribution on the fly by optimizing a model for each test input using self-supervision. In this paper, we use masked autoencoders for this one-sample learning problem. Empirically, our simple method improves generalization on many visual benchmarks for distribution shifts. Theoretically, we characterize this improvement in terms of the bias-variance trade-off.

Cite

Text

Gandelsman et al. "Test-Time Training with Masked Autoencoders." Neural Information Processing Systems, 2022.

Markdown

[Gandelsman et al. "Test-Time Training with Masked Autoencoders." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/gandelsman2022neurips-testtime/)

BibTeX

@inproceedings{gandelsman2022neurips-testtime,
  title     = {{Test-Time Training with Masked Autoencoders}},
  author    = {Gandelsman, Yossi and Sun, Yu and Chen, Xinlei and Efros, Alexei A.},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/gandelsman2022neurips-testtime/}
}