Retrospective Feature Estimation for Continual Learning
Abstract
The intrinsic capability to continuously learn from a changing data stream is a desideratum of deep neural networks (DNNs). However, current DNNs suffer from catastrophic forgetting, in which learning new tasks degrades previously acquired knowledge. To mitigate this issue, existing Continual Learning (CL) approaches often retain exemplars for replay, regularize learning, or allocate dedicated capacity for new tasks. This paper investigates an unexplored direction for CL called Retrospective Feature Estimation (RFE). RFE learns to reverse feature changes by aligning the features of the current trained DNN backward to the feature space of the old task, where performing predictions is easier. This retrospective process utilizes a chain of small feature mapping networks called retrospector modules. Experiments on several CL benchmarks, including CIFAR10, CIFAR100, and Tiny ImageNet, demonstrate the effectiveness and potential of this novel CL direction compared to existing representative CL methods, motivating further research into retrospective mechanisms as a principled alternative for mitigating catastrophic forgetting in CL. Code is available at: https://github.com/mail-research/retrospective-feature-estimation.
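The core idea can be illustrated with a minimal sketch. Assuming (the paper's retrospector modules are small learned networks; the linear map, dimensions, and synthetic drift below are illustrative stand-ins, not the authors' implementation): after updating the backbone on a new task, features of old-task inputs drift; a retrospector is fit to map current features back to the old feature space, where the old classifier still applies.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n = 16, 256  # feature dimension and number of samples (illustrative)

# Stand-ins for features of the same inputs before and after a task update.
h_old = rng.normal(size=(n, dim))          # old-task feature space
drift = rng.normal(size=(dim, dim)) * 0.1  # synthetic feature drift
h_new = h_old @ (np.eye(dim) + drift)      # drifted features after the update

# Retrospector: here a single linear map, fit by least squares to
# reverse the drift (the paper uses small learned mapping networks).
R, *_ = np.linalg.lstsq(h_new, h_old, rcond=None)

h_est = h_new @ R  # retrospective estimate of the old-task features
err_before = float(np.mean((h_new - h_old) ** 2))
err_after = float(np.mean((h_est - h_old) ** 2))
print(f"alignment MSE before: {err_before:.4f}, after: {err_after:.2e}")
```

Chaining such maps across tasks, as the paper proposes, would successively recover earlier feature spaces; here a single step suffices to show the alignment effect.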
Cite
Text
Nguyen et al. "Retrospective Feature Estimation for Continual Learning." Transactions on Machine Learning Research, 2026.
Markdown
[Nguyen et al. "Retrospective Feature Estimation for Continual Learning." Transactions on Machine Learning Research, 2026.](https://mlanthology.org/tmlr/2026/nguyen2026tmlr-retrospective/)
BibTeX
@article{nguyen2026tmlr-retrospective,
title = {{Retrospective Feature Estimation for Continual Learning}},
author = {Nguyen, Nghia D. and Nguyen, Hieu Trung and Li, Ang and Pham, Hoang and Nguyen, Viet Anh and Doan, Khoa D.},
journal = {Transactions on Machine Learning Research},
year = {2026},
url = {https://mlanthology.org/tmlr/2026/nguyen2026tmlr-retrospective/}
}