Automatic Domain Adaptation by Transformers in In-Context Learning

Abstract

Selecting or designing an appropriate domain adaptation algorithm for a given problem remains challenging. This paper presents a Transformer model that can provably approximate and select domain adaptation methods for a given dataset within the in-context learning framework, where a foundation model performs new tasks without updating its parameters at test time. Specifically, we prove that Transformers can (i) approximate instance-based and feature-based unsupervised domain adaptation algorithms and (ii) automatically select the approximated algorithm best suited to a given dataset. Numerical results indicate that in-context learning achieves adaptive domain adaptation that surpasses existing methods.
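
To make the in-context setup described above concrete, here is a minimal PyTorch sketch, not the authors' model: a Transformer reads labeled source examples and unlabeled target inputs as a single prompt and predicts target labels with no test-time parameter update. The class name `ICLDomainAdapter`, the token packing scheme, and all dimensions are illustrative assumptions.

```python
# Hypothetical sketch of in-context unsupervised domain adaptation;
# architecture and names are assumptions, not the paper's construction.
import torch
import torch.nn as nn

d_feat, n_classes = 16, 3

class ICLDomainAdapter(nn.Module):
    def __init__(self, d_model=64, n_layers=4, n_heads=4):
        super().__init__()
        # Each token packs a feature vector with a (possibly masked) label.
        self.embed = nn.Linear(d_feat + n_classes, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x, y_onehot):
        # x: (B, n_src + n_tgt, d_feat); y_onehot is zeroed for target tokens.
        tokens = self.embed(torch.cat([x, y_onehot], dim=-1))
        return self.head(self.encoder(tokens))  # per-token class logits

# The "dataset as prompt": labeled source + unlabeled (shifted) target.
n_src, n_tgt = 32, 8
x_src = torch.randn(1, n_src, d_feat)
y_src = torch.randint(0, n_classes, (1, n_src))
x_tgt = torch.randn(1, n_tgt, d_feat) + 2.0  # simulated domain shift
x = torch.cat([x_src, x_tgt], dim=1)
y = torch.cat([nn.functional.one_hot(y_src, n_classes).float(),
               torch.zeros(1, n_tgt, n_classes)], dim=1)  # mask target labels

model = ICLDomainAdapter()
with torch.no_grad():  # no parameter updates at test time
    preds = model(x, y)[:, n_src:].argmax(-1)  # predictions for target inputs
print(preds.shape)  # torch.Size([1, 8])
```

The key point the sketch conveys is that the entire source dataset and the target inputs appear only in the forward pass: any adaptation behavior must be implemented implicitly by the (pretrained) Transformer weights, which is the setting the paper's approximation and selection results address.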

Cite

Text

Hataya et al. "Automatic Domain Adaptation by Transformers in In-Context Learning." ICML 2024 Workshops: ICL, 2024.

Markdown

[Hataya et al. "Automatic Domain Adaptation by Transformers in In-Context Learning." ICML 2024 Workshops: ICL, 2024.](https://mlanthology.org/icmlw/2024/hataya2024icmlw-automatic/)

BibTeX

@inproceedings{hataya2024icmlw-automatic,
  title     = {{Automatic Domain Adaptation by Transformers in In-Context Learning}},
  author    = {Hataya, Ryuichiro and Matsui, Kota and Imaizumi, Masaaki},
  booktitle = {ICML 2024 Workshops: ICL},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/hataya2024icmlw-automatic/}
}