YoooP: You Only Optimize One Prototype per Class for Non-Exemplar Incremental Learning

Abstract

Incremental learning (IL) typically addresses catastrophic forgetting of old tasks when learning new tasks by replaying old tasks' raw data stored in memory, a strategy constrained by memory size and the risk of privacy leakage. Recent non-exemplar IL methods store class centroids as prototypes and perturb them with high-dimensional Gaussian noise to generate synthetic data for replay. Unfortunately, this approach has two major limitations. First, the boundaries between the embedding clusters around prototypes of different classes can be unclear, leading to serious catastrophic forgetting. Second, directly applying high-dimensional Gaussian noise produces nearly identical synthetic samples that fail to preserve the true data distribution, ultimately degrading performance. In this paper, we propose YoooP, a novel exemplar-free IL approach that greatly outperforms previous methods by storing and replaying only one prototype per class, even without synthetic data replay. Instead of merely storing class centroids, YoooP optimizes each prototype by (1) shifting it toward a high-density region within its class using an attentional mean-shift algorithm, and (2) optimizing its cosine similarity with class-specific embeddings to form compact, well-separated clusters. As a result, replaying only the optimized prototypes effectively reduces inter-class interference and maintains clear decision boundaries. Furthermore, we extend YoooP to YoooP+ by synthesizing replay data that preserves the angular distribution between each class prototype and the class's historical real data, a property that high-dimensional Gaussian perturbation cannot capture. YoooP+ effectively stabilizes and further improves YoooP without storing any real data. Extensive experiments demonstrate the superiority of YoooP/YoooP+ over non-exemplar baselines across multiple metrics. The code is released at https://github.com/Snowball0823/YoooP.git.
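
To make the two-step prototype optimization concrete, below is a minimal PyTorch sketch assuming L2-normalized embeddings. The function names (attentional_mean_shift, compactness_loss), the temperature tau, and the exact update rule are illustrative assumptions, not the paper's reference implementation; see the linked repository for that.

# Sketch of the two prototype-optimization steps described in the abstract.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F

def attentional_mean_shift(embeddings, tau=0.1, n_steps=10):
    """Shift a prototype toward a high-density region of one class's embeddings.

    embeddings: (N, D) L2-normalized feature vectors of a single class.
    Returns a unit-norm prototype of shape (D,).
    """
    proto = F.normalize(embeddings.mean(dim=0), dim=0)  # start from the centroid
    for _ in range(n_steps):
        # Attention weights: embeddings closer (in cosine similarity) to the
        # current prototype pull it more strongly, so it drifts into dense regions.
        attn = torch.softmax(embeddings @ proto / tau, dim=0)  # (N,)
        proto = F.normalize(attn @ embeddings, dim=0)          # weighted mean-shift step
    return proto

def compactness_loss(proto, embeddings):
    """Pull class embeddings toward their prototype by maximizing cosine
    similarity, tightening the cluster so prototypes of different classes
    stay well separated."""
    return (1.0 - embeddings @ proto).mean()

# Toy usage: 100 embeddings of one class in a 64-d feature space.
z = F.normalize(torch.randn(100, 64), dim=1)
p = attentional_mean_shift(z)
loss = compactness_loss(p, z)  # in practice, backpropagated through the encoder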
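Likewise, a hedged sketch of the YoooP+ replay idea: summarize the one-dimensional distribution of angles between a prototype and its class's real embeddings, then resample those angles to place synthetic points on the hypersphere around the prototype. The Gaussian fit of the angles and all function names below are hypothetical; storing only a 1-D angular summary per class is what distinguishes this from raw high-dimensional Gaussian perturbation, which yields nearly identical samples.

# Sketch of angular-distribution-preserving replay in the spirit of YoooP+.
# The Gaussian angle summary and all names are illustrative assumptions.
import torch
import torch.nn.functional as F

def fit_angles(proto, embeddings):
    """Summarize the angles between one prototype and its class's real embeddings."""
    cos = (embeddings @ proto).clamp(-1.0, 1.0)
    theta = torch.acos(cos)  # (N,) angles in radians
    return theta.mean(), theta.std()

def synthesize(proto, theta_mean, theta_std, n):
    """Generate n unit-norm synthetic embeddings whose angles to the prototype
    follow the stored angular distribution."""
    d = proto.numel()
    theta = torch.randn(n) * theta_std + theta_mean  # resample angles
    # Random directions orthogonal to the prototype determine where on the
    # hypersphere each synthetic point lands at its sampled angle.
    u = torch.randn(n, d)
    u = F.normalize(u - (u @ proto)[:, None] * proto, dim=1)
    return torch.cos(theta)[:, None] * proto + torch.sin(theta)[:, None] * u

# Toy usage: store (mean, std) per class, then replay without any real data.
z = F.normalize(torch.randn(100, 64), dim=1)
p = F.normalize(z.mean(dim=0), dim=0)
mu, sigma = fit_angles(p, z)
replay = synthesize(p, mu, sigma, n=32)  # (32, 64) synthetic embeddings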

Cite

Text

Kong et al. "YoooP: You Only Optimize One Prototype per Class for Non-Exemplar Incremental Learning." Transactions on Machine Learning Research, 2025.

Markdown

[Kong et al. "YoooP: You Only Optimize One Prototype per Class for Non-Exemplar Incremental Learning." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/kong2025tmlr-yooop/)

BibTeX

@article{kong2025tmlr-yooop,
  title     = {{YoooP: You Only Optimize One Prototype per Class for Non-Exemplar Incremental Learning}},
  author    = {Kong, Jiangtao and Zong, Zhenyu and Zhou, Tianyi and Shao, Huajie},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/kong2025tmlr-yooop/}
}