Parabolic Continual Learning
Abstract
Regularizing continual learning techniques is important for anticipating algorithmic behavior under new realizations of data. We introduce a new approach to continual learning that imposes the properties of a parabolic partial differential equation (PDE) to regularize the expected behavior of the loss over time. This class of parabolic PDEs has a number of favorable properties that allow us to analyze both the error incurred through forgetting and the error induced through generalization. Specifically, we do this by imposing boundary conditions in which the boundary is given by a memory buffer. Using the memory buffer as a boundary lets us enforce long-term dependencies by bounding the expected error by the boundary loss. Finally, we illustrate the empirical performance of the method on a series of continual learning tasks.
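
To make the construction concrete, below is a minimal, hypothetical sketch of how such a regularizer could look in practice. It is not the authors' implementation: it assumes the time derivative of the loss is approximated by a finite difference between training steps, the input-space Laplacian by a symmetric random perturbation, and the boundary condition by a replay loss on the memory buffer; all function names, weights, and the toy data are illustrative.

# Hypothetical sketch (assumptions noted above): a continual learning step that
# combines a task loss, a "boundary" loss on the memory buffer, and a penalty on
# a heat-equation-like residual of the loss, d_t L - Lap_x L.
import torch
import torch.nn as nn
import torch.nn.functional as F


def parabolic_residual(model, x, y, prev_loss, eps=1e-2):
    """Squared residual of d_t L - Lap_x L, both crudely approximated."""
    loss = F.cross_entropy(model(x), y)
    v = eps * torch.randn_like(x)                        # random perturbation direction
    lap = (F.cross_entropy(model(x + v), y)
           + F.cross_entropy(model(x - v), y)
           - 2.0 * loss) / eps ** 2                      # central-difference Laplacian estimate
    dt = loss - prev_loss                                # finite-difference time derivative
    return loss, (dt - lap) ** 2


def continual_step(model, opt, new_batch, memory_batch, prev_loss,
                   lam_boundary=1.0, lam_pde=0.1):
    x_new, y_new = new_batch
    x_mem, y_mem = memory_batch

    task_loss, pde_penalty = parabolic_residual(model, x_new, y_new, prev_loss)
    boundary_loss = F.cross_entropy(model(x_mem), y_mem)  # memory buffer acts as boundary data

    total = task_loss + lam_boundary * boundary_loss + lam_pde * pde_penalty
    opt.zero_grad()
    total.backward()
    opt.step()
    return task_loss.detach()                             # becomes prev_loss at the next step


# Toy usage: a stream of incoming batches with a small fixed memory buffer.
if __name__ == "__main__":
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    prev_loss = torch.tensor(0.0)
    x_mem, y_mem = torch.randn(32, 20), torch.randint(0, 5, (32,))  # memory (boundary) buffer
    for _ in range(100):
        x, y = torch.randn(32, 20), torch.randint(0, 5, (32,))      # incoming task data
        prev_loss = continual_step(model, opt, (x, y), (x_mem, y_mem), prev_loss)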
Cite

Text
Yang et al. "Parabolic Continual Learning." Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, 2025.

Markdown
[Yang et al. "Parabolic Continual Learning." Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, 2025.](https://mlanthology.org/aistats/2025/yang2025aistats-parabolic/)

BibTeX
@inproceedings{yang2025aistats-parabolic,
title = {{Parabolic Continual Learning}},
author = {Yang, Haoming and Hasan, Ali and Tarokh, Vahid},
booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
year = {2025},
pages = {2620--2628},
volume = {258},
url = {https://mlanthology.org/aistats/2025/yang2025aistats-parabolic/}
}