Dual PatchNorm

Abstract

We propose Dual PatchNorm: two Layer Normalization layers (LayerNorms), before and after the patch embedding layer in Vision Transformers. We demonstrate that Dual PatchNorm outperforms the result of exhaustive search for alternative LayerNorm placement strategies in the Transformer block itself. In our experiments on image classification and contrastive learning, incorporating this trivial modification often leads to improved accuracy over well-tuned vanilla Vision Transformers and never hurts.
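As a rough illustration of the idea in the abstract, the sketch below (in Flax) applies one LayerNorm to the flattened image patches before the linear patch projection and a second LayerNorm to the resulting patch embeddings. The module name, patch size, and embedding width are illustrative assumptions, not the authors' implementation.

```python
import jax
import jax.numpy as jnp
import flax.linen as nn


class DualPatchNormEmbed(nn.Module):
    """Sketch of a patch embedding with LayerNorm before and after the
    linear projection (hyperparameters are illustrative defaults)."""
    patch_size: int = 16
    embed_dim: int = 768

    @nn.compact
    def __call__(self, x):
        # x: (B, H, W, C) image batch -> non-overlapping patches as flat vectors
        B, H, W, C = x.shape
        p = self.patch_size
        x = x.reshape(B, H // p, p, W // p, p, C)
        x = x.transpose(0, 1, 3, 2, 4, 5).reshape(B, (H // p) * (W // p), p * p * C)
        x = nn.LayerNorm()(x)             # first LN: on raw flattened patches
        x = nn.Dense(self.embed_dim)(x)   # patch embedding projection
        x = nn.LayerNorm()(x)             # second LN: on the patch embeddings
        return x


# Usage sketch: initialize and apply to a dummy batch of 224x224 RGB images.
params = DualPatchNormEmbed().init(jax.random.PRNGKey(0), jnp.zeros((1, 224, 224, 3)))
tokens = DualPatchNormEmbed().apply(params, jnp.zeros((1, 224, 224, 3)))
```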

Cite

Text

Kumar et al. "Dual PatchNorm." Transactions on Machine Learning Research, 2023.

Markdown

[Kumar et al. "Dual PatchNorm." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/kumar2023tmlr-dual/)

BibTeX

@article{kumar2023tmlr-dual,
  title     = {{Dual PatchNorm}},
  author    = {Kumar, Manoj and Dehghani, Mostafa and Houlsby, Neil},
  journal   = {Transactions on Machine Learning Research},
  year      = {2023},
  url       = {https://mlanthology.org/tmlr/2023/kumar2023tmlr-dual/}
}