Multiplicative Logit Adjustment Approximates Neural-Collapse-Aware Decision Boundary Adjustment
Abstract
Real-world data distributions are often highly skewed. This has spurred a growing body of research on long-tailed recognition, which aims to mitigate class imbalance when training classification models. Among the methods studied, multiplicative logit adjustment (MLA) stands out as simple and effective. What theoretical foundation explains the effectiveness of this heuristic method? We justify MLA in two steps. First, we develop a theory that adjusts optimal decision boundaries by estimating feature spread on the basis of neural collapse. Second, we demonstrate that MLA approximates this optimal adjustment. Additionally, through experiments on long-tailed datasets, we illustrate the practical usefulness of MLA under more realistic conditions, and we offer experimental insights to guide the tuning of its hyperparameters.
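To make the idea concrete, here is a minimal sketch of post-hoc multiplicative logit adjustment in the common formulation from the logit-adjustment literature: each class logit is scaled by the inverse class prior raised to a temperature `tau`. The function name, the toy priors and logits, and the exact parameterization are illustrative assumptions, not necessarily the paper's precise definition.

```python
import numpy as np

def multiplicative_logit_adjustment(logits, class_priors, tau=1.0):
    """Post-hoc multiplicative logit adjustment (illustrative sketch).

    Scales each class logit by class_prior**(-tau), which boosts tail
    classes at prediction time. `tau` is the tuning hyperparameter the
    abstract refers to; tau = 0 recovers the unadjusted classifier.
    Assumes positive logits for the scaling to preserve the intent.
    """
    return logits / np.power(class_priors, tau)

# Toy long-tailed 3-class problem (hypothetical numbers).
priors = np.array([0.7, 0.2, 0.1])   # head, mid, tail class frequencies
logits = np.array([2.0, 1.8, 1.5])   # raw model scores for one sample

adjusted = multiplicative_logit_adjustment(logits, priors, tau=1.0)
print(np.argmax(logits))    # 0: the head class wins on raw logits
print(np.argmax(adjusted))  # 2: the tail class wins after adjustment
```

With `tau = 1.0` the adjusted scores become roughly `[2.86, 9.0, 15.0]`, flipping the prediction from the head class to the tail class; in practice `tau` is tuned on validation data.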
Cite
Text
Hasegawa and Sato. "Multiplicative Logit Adjustment Approximates Neural-Collapse-Aware Decision Boundary Adjustment." International Conference on Learning Representations, 2025.
Markdown
[Hasegawa and Sato. "Multiplicative Logit Adjustment Approximates Neural-Collapse-Aware Decision Boundary Adjustment." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/hasegawa2025iclr-multiplicative/)
BibTeX
@inproceedings{hasegawa2025iclr-multiplicative,
title = {{Multiplicative Logit Adjustment Approximates Neural-Collapse-Aware Decision Boundary Adjustment}},
author = {Hasegawa, Naoya and Sato, Issei},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/hasegawa2025iclr-multiplicative/}
}