Learning the Essential in Less than 2k Additional Weights - A Simple Approach to Improve Image Classification Stability Under Corruptions

Abstract

The performance of image classification on well-known benchmarks such as ImageNet is remarkable, but in safety-critical situations, the accuracy often drops significantly under adverse conditions. To counteract these performance drops, we propose a very simple modification to the models: we prepend a single, dimension-preserving convolutional layer with a large linear kernel whose purpose is to extract the information that is essential for image classification. We show that our simple modification can increase the robustness against common corruptions significantly, especially for corruptions of high severity. We demonstrate the impact of our channel-specific layers on ImageNet-100 and ImageNette classification tasks and show an increase of up to 30% in top-1 accuracy on corrupted data. Further, we conduct a set of designed experiments to characterize the conditions under which our findings hold. Our main result is that a data- and network-dependent linear subspace carries the most important classification information (the essential), which our proposed pre-processing layer approximately identifies for most corruptions, and at very low cost.
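The abstract describes prepending a single, dimension-preserving, channel-specific convolutional layer with a large kernel. A minimal PyTorch sketch of such a layer is shown below; the class name `EssentialExtractor` and the kernel size of 21 are illustrative assumptions, not values taken from the paper. Note that with 3 input channels and a 21×21 depthwise kernel, the layer adds 3 × 21 × 21 = 1,323 weights, consistent with the "less than 2k additional weights" budget in the title.

```python
import torch
import torch.nn as nn


class EssentialExtractor(nn.Module):
    """Illustrative sketch (name and kernel size are assumptions):
    a dimension-preserving, channel-specific convolution prepended
    to a classifier to extract corruption-robust information."""

    def __init__(self, channels: int = 3, kernel_size: int = 21):
        super().__init__()
        # "same" padding keeps spatial dimensions unchanged (odd kernel)
        padding = kernel_size // 2
        # groups=channels makes the filter channel-specific (depthwise):
        # each channel is convolved with its own single kernel
        self.conv = nn.Conv2d(
            channels, channels, kernel_size,
            padding=padding, groups=channels, bias=False,
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x)


# Prepend the layer to any backbone, e.g. a hypothetical classifier:
# model = nn.Sequential(EssentialExtractor(), backbone)
```

Because the layer preserves both channel count and spatial resolution, it can be dropped in front of an existing, possibly pretrained, backbone without any further architectural changes.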

Cite

Text

Bäuerle et al. "Learning the Essential in Less than 2k Additional Weights - A Simple Approach to Improve Image Classification Stability Under Corruptions." Transactions on Machine Learning Research, 2024.

Markdown

[Bäuerle et al. "Learning the Essential in Less than 2k Additional Weights - A Simple Approach to Improve Image Classification Stability Under Corruptions." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/bauerle2024tmlr-learning/)

BibTeX

@article{bauerle2024tmlr-learning,
  title     = {{Learning the Essential in Less than 2k Additional Weights - A Simple Approach to Improve Image Classification Stability Under Corruptions}},
  author    = {Bäuerle, Kai and Müller, Patrick and Kazim, Syed Muhammad and Ihrke, Ivo and Keuper, Margret},
  journal   = {Transactions on Machine Learning Research},
  year      = {2024},
  url       = {https://mlanthology.org/tmlr/2024/bauerle2024tmlr-learning/}
}