Exploring the Potential of Direct Feedback Alignment for Continual Learning

Abstract

Real-world applications of machine learning require robustness to shifts in the data distribution over time. A critical limitation of standard artificial neural networks trained with backpropagation (BP) is their susceptibility to catastrophic forgetting: they “forget” prior knowledge when trained on a new task, whereas biological neural networks tend to be more robust to such forgetting. While various algorithmic ways of mitigating catastrophic forgetting have been proposed, developing an optimization algorithm that is capable of learning continuously remains an open problem. Motivated by recent theoretical results, here we explore whether a biologically inspired learning algorithm like Direct Feedback Alignment (DFA) can mitigate catastrophic forgetting in artificial neural networks. We train fully-connected networks on several continual learning benchmarks using DFA and compare its performance to vanilla backpropagation, random features, and other continual learning algorithms. We find that an inherent bias of DFA, called “degeneracy breaking”, leads to low average forgetting on common continual learning benchmarks when using DFA in the Domain-Incremental and the Task-Incremental learning scenarios. We show how to control the trade-off between learning and forgetting with DFA, and relate different modes of using DFA to other methods in the field.
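For readers unfamiliar with DFA, the sketch below illustrates the mechanism the abstract refers to: instead of propagating the error backward through the transposed forward weights, each hidden layer receives the output error directly through its own fixed random feedback matrix. This is a minimal, illustrative implementation of generic DFA (a single update for a two-hidden-layer tanh network under squared-error loss), not the architectures or hyperparameters used in the paper; the dimensions, learning rate, and initialization scales are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, d_out, lr = 784, 256, 10, 0.01   # illustrative sizes, not from the paper

# Forward weights (trained).
W1 = 0.01 * rng.standard_normal((d_h, d_in))
W2 = 0.01 * rng.standard_normal((d_h, d_h))
W3 = 0.01 * rng.standard_normal((d_out, d_h))

# Fixed random feedback matrices (never updated) that project the output
# error directly to each hidden layer -- the defining feature of DFA.
B1 = rng.standard_normal((d_h, d_out))
B2 = rng.standard_normal((d_h, d_out))

x = rng.standard_normal(d_in)               # one input example
y = np.eye(d_out)[3]                        # one-hot target

# Forward pass.
h1 = np.tanh(W1 @ x)
h2 = np.tanh(W2 @ h1)
y_hat = W3 @ h2                             # linear readout
e = y_hat - y                               # output error under squared-error loss

# DFA error signals: the output error sent through the fixed random matrices,
# gated by the local activation derivative (tanh'(a) = 1 - tanh(a)^2).
delta2 = (B2 @ e) * (1 - h2**2)
delta1 = (B1 @ e) * (1 - h1**2)

# Weight updates: outer products of the error signals with each layer's input.
W3 -= lr * np.outer(e, h2)
W2 -= lr * np.outer(delta2, h1)
W1 -= lr * np.outer(delta1, x)

The only difference from backpropagation lies in how delta2 and delta1 are formed: under BP they would be W3.T @ e and W2.T @ delta2, whereas DFA replaces these backward paths with the fixed random projections B2 @ e and B1 @ e.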

Cite

Text

Folchini et al. "Exploring the Potential of Direct Feedback Alignment for Continual Learning." Transactions on Machine Learning Research, 2025.

Markdown

[Folchini et al. "Exploring the Potential of Direct Feedback Alignment for Continual Learning." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/folchini2025tmlr-exploring/)

BibTeX

@article{folchini2025tmlr-exploring,
  title     = {{Exploring the Potential of Direct Feedback Alignment for Continual Learning}},
  author    = {Folchini, Sara and Arora, Viplove and Goldt, Sebastian},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/folchini2025tmlr-exploring/}
}