Online Continual Learning via Logit Adjusted SoftMax
Abstract
Online continual learning is a challenging problem in which models must learn from a non-stationary data stream while avoiding catastrophic forgetting. Inter-class imbalance during training has been identified as a major cause of forgetting, biasing model predictions towards recently learned classes. In this paper, we show theoretically that inter-class imbalance is entirely attributable to imbalanced class priors, and that the function learned from the intra-class intrinsic distributions is the optimal classifier minimizing the class-balanced error. Building on this, we show that a simple adjustment of the model logits during training effectively counteracts the class-prior bias and pursues the corresponding optimum. Our proposed method, Logit Adjusted Softmax, mitigates the impact of inter-class imbalance not only in class-incremental learning but also in realistic scenarios that combine class- and domain-incremental learning, at little additional computational cost. We evaluate our approach on various benchmarks and demonstrate significant performance improvements over prior art. For example, our approach improves the best baseline by 4.6% on CIFAR10.
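To illustrate the kind of logit adjustment the abstract describes, here is a minimal PyTorch-style sketch of a prior-adjusted softmax cross-entropy for a streaming setting. It assumes the classic logit adjustment form (shifting logits by the log of the class priors) with priors estimated from running label counts over the stream; the names `logit_adjusted_ce`, `class_counts`, and `tau` are illustrative and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def logit_adjusted_ce(logits, targets, class_counts, tau=1.0, eps=1e-12):
    """Cross-entropy on logits shifted by tau * log(class prior).

    logits:       (batch, num_classes) raw model outputs
    targets:      (batch,) integer class labels
    class_counts: (num_classes,) running counts of labels seen so far
    tau:          strength of the prior adjustment (illustrative default)
    """
    priors = class_counts.float() / class_counts.sum().clamp(min=1)
    adjusted = logits + tau * torch.log(priors + eps)  # penalize frequent classes
    return F.cross_entropy(adjusted, targets)

# Online usage sketch: update the counts from each incoming mini-batch,
# then train on the adjusted logits.
num_classes = 10
class_counts = torch.zeros(num_classes)
# for x, y in stream:
#     class_counts += torch.bincount(y, minlength=num_classes)
#     loss = logit_adjusted_ce(model(x), y, class_counts)
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```

The adjustment adds only a vector addition per batch, which is consistent with the abstract's claim of little additional computational cost; the exact estimator of the priors used by the paper may differ from this running-count sketch.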
Cite
Text
Huang et al. "Online Continual Learning via Logit Adjusted SoftMax." Transactions on Machine Learning Research, 2024.
Markdown
[Huang et al. "Online Continual Learning via Logit Adjusted SoftMax." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/huang2024tmlr-online/)
BibTeX
@article{huang2024tmlr-online,
title = {{Online Continual Learning via Logit Adjusted SoftMax}},
author = {Huang, Zhehao and Li, Tao and Yuan, Chenhe and Wu, Yingwen and Huang, Xiaolin},
journal = {Transactions on Machine Learning Research},
year = {2024},
url = {https://mlanthology.org/tmlr/2024/huang2024tmlr-online/}
}