Scale Equalization for Multi-Level Feature Fusion

Abstract

Deep neural networks have exhibited remarkable performance in a variety of computer vision fields, especially in semantic segmentation tasks. Their success is often attributed to multi-level feature fusion, which enables them to understand both global and local information from an image. However, multi-level features from parallel branches exhibit different scales, a universal and unwanted flaw that leads to detrimental gradient descent and thereby degrades performance in semantic segmentation. We discover that this scale disequilibrium is caused by bilinear upsampling, supported by both theoretical and empirical evidence. Based on this observation, we propose injecting scale equalizers to achieve scale equilibrium across multi-level features after bilinear upsampling. The proposed scale equalizers are easy to implement, applicable to any architecture, and hyperparameter-free; they incur no extra computational cost and guarantee scale equilibrium for any dataset. Experiments showed that adopting scale equalizers consistently improved mIoU across various target datasets, including ADE20K, PASCAL VOC 2012, and Cityscapes, as well as various decoder choices, including UPerHead, PSPHead, ASPPHead, SepASPPHead, and FCNHead.
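
The abstract describes injecting a scale equalizer after bilinear upsampling so that multi-level features share a common scale before fusion. The paper's exact equalizer formulation is in the full text; the sketch below is only an illustration under the assumption that the equalizer standardizes each upsampled feature map per channel over its spatial dimensions. The names `scale_equalizer` and `fuse` are hypothetical, not identifiers from the paper's code.

```python
import torch
import torch.nn.functional as F


def scale_equalizer(x: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # Assumed form: standardize each channel to zero mean and unit
    # variance over the spatial dimensions, so features from different
    # branches end up on a common scale after bilinear upsampling.
    mean = x.mean(dim=(2, 3), keepdim=True)
    std = x.std(dim=(2, 3), keepdim=True)
    return (x - mean) / (std + eps)


def fuse(features: list[torch.Tensor]) -> torch.Tensor:
    # Hypothetical multi-level fusion: bilinearly upsample every branch
    # to the finest resolution, equalize scales, then concatenate along
    # the channel dimension for the decoder head.
    target_size = features[0].shape[2:]
    equalized = [
        scale_equalizer(
            F.interpolate(f, size=target_size, mode="bilinear",
                          align_corners=False)
        )
        for f in features
    ]
    return torch.cat(equalized, dim=1)
```

Under this reading, the equalizer needs no learned parameters or tuning, which is consistent with the abstract's claims of being hyperparameter-free and adding negligible computational cost.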

Cite

Text

Kim and Kim. "Scale Equalization for Multi-Level Feature Fusion." Transactions on Machine Learning Research, 2024.

Markdown

[Kim and Kim. "Scale Equalization for Multi-Level Feature Fusion." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/kim2024tmlr-scale/)

BibTeX

@article{kim2024tmlr-scale,
  title     = {{Scale Equalization for Multi-Level Feature Fusion}},
  author    = {Kim, Bum Jun and Kim, Sang Woo},
  journal   = {Transactions on Machine Learning Research},
  year      = {2024},
  url       = {https://mlanthology.org/tmlr/2024/kim2024tmlr-scale/}
}