TransBoost: Improving the Best ImageNet Performance Using Deep Transduction
Abstract
This paper deals with deep transductive learning, and proposes TransBoost as a procedure for fine-tuning any deep neural model to improve its performance on any (unlabeled) test set provided at training time. TransBoost is inspired by a large margin principle and is efficient and simple to use. Our method significantly improves the ImageNet classification performance on a wide range of architectures, such as ResNets, MobileNetV3-L, EfficientNetB0, ViT-S, and ConvNext-T, leading to state-of-the-art transductive performance. Additionally, we show that TransBoost is effective on a wide variety of image classification datasets. The implementation of TransBoost is provided at: https://github.com/omerb01/TransBoost.
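To make the transductive setting concrete, the sketch below shows a generic PyTorch fine-tuning loop that consumes both a labeled training loader and a batch of unlabeled test inputs available at training time. The function name `transductive_finetune`, the specific margin-style term on the unlabeled inputs, and the `unlabeled_weight` hyperparameter are illustrative assumptions and are not the TransBoost objective; the official implementation is at the repository linked above.

```python
# Minimal, illustrative sketch of transductive fine-tuning in PyTorch.
# NOTE: the unlabeled loss term below (maximizing the gap between the top
# two softmax scores on test inputs) is an assumption for illustration
# only; it is NOT the TransBoost loss from the paper.

import torch
import torch.nn.functional as F


def transductive_finetune(model, train_loader, test_inputs,
                          epochs=1, lr=1e-4, unlabeled_weight=0.1):
    """Fine-tune `model` on labeled data plus an unlabeled test set.

    `test_inputs` is a tensor of test images provided at training time;
    their labels are never used.
    """
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for x, y in train_loader:
            # Standard supervised loss on the labeled training batch.
            supervised = F.cross_entropy(model(x), y)

            # Illustrative large-margin-style term on unlabeled test inputs:
            # push the top softmax score away from the runner-up.
            probs = F.softmax(model(test_inputs), dim=1)
            top2 = probs.topk(2, dim=1).values
            unlabeled = (1.0 - (top2[:, 0] - top2[:, 1])).mean()

            loss = supervised + unlabeled_weight * unlabeled
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```

In practice one would start from a pretrained backbone (e.g., a torchvision ResNet) and fine-tune it with the unlabeled test set of interest; the point of the sketch is only the data flow of transduction, namely that the test inputs enter the training loop without their labels.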
Cite
Text
Belhasin et al. "TransBoost: Improving the Best ImageNet Performance Using Deep Transduction." Neural Information Processing Systems, 2022.
Markdown
[Belhasin et al. "TransBoost: Improving the Best ImageNet Performance Using Deep Transduction." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/belhasin2022neurips-transboost/)
BibTeX
@inproceedings{belhasin2022neurips-transboost,
title = {{TransBoost: Improving the Best ImageNet Performance Using Deep Transduction}},
author = {Belhasin, Omer and Bar-Shalom, Guy and El-Yaniv, Ran},
booktitle = {Neural Information Processing Systems},
year = {2022},
url = {https://mlanthology.org/neurips/2022/belhasin2022neurips-transboost/}
}