Non-Stationary Learning of Neural Networks with Automatic Soft Parameter Reset
Abstract
Neural networks are most often trained under the assumption that data come from a stationary distribution. However, settings in which this assumption is violated are of increasing importance; examples include supervised learning with distributional shifts, reinforcement learning, continual learning, and non-stationary contextual bandits. Here, we introduce a novel learning approach that automatically models and adapts to non-stationarity by linking parameters through an Ornstein-Uhlenbeck process with an adaptive drift parameter. The adaptive drift draws the parameters towards the distribution used at initialisation, so the approach can be understood as a form of soft parameter reset. We show empirically that our approach performs well in non-stationary supervised learning and off-policy reinforcement learning settings.
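To make the idea concrete, below is a minimal Python sketch of one OU-style "soft reset" update: a drift rate pulls the current parameters toward a fresh draw from the initialisation distribution, with optional diffusion noise. This is an illustrative assumption, not the authors' implementation; in particular, the paper adapts the drift parameter online, whereas `gamma` is fixed here, and all function and variable names are hypothetical.

```python
import numpy as np

def soft_parameter_reset(theta, theta_init_sample, gamma, sigma=0.0, rng=None):
    """One OU-style update that drifts parameters toward the
    initialisation distribution (a 'soft reset').

    theta             -- current parameter vector
    theta_init_sample -- a fresh draw from the initialisation distribution
    gamma             -- drift rate in [0, 1]; gamma=0 leaves theta unchanged,
                         gamma=1 fully resets to the init sample
                         (fixed here; adapted online in the paper)
    sigma             -- optional diffusion noise scale
    """
    rng = rng or np.random.default_rng()
    noise = sigma * rng.standard_normal(theta.shape)
    return (1.0 - gamma) * theta + gamma * theta_init_sample + noise

# Illustrative usage: a small soft reset applied between training steps.
rng = np.random.default_rng(0)
theta = rng.standard_normal(4)               # current parameters
theta_init = 0.1 * rng.standard_normal(4)    # draw from the init distribution
theta = soft_parameter_reset(theta, theta_init, gamma=0.05, rng=rng)
```

Under this discretisation, a larger `gamma` corresponds to a stronger pull back toward initialisation; the paper's contribution is to infer this drift automatically from the data's non-stationarity rather than fixing it by hand.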
Cite
Text
Galashov et al. "Non-Stationary Learning of Neural Networks with Automatic Soft Parameter Reset." Neural Information Processing Systems, 2024. doi:10.52202/079017-2647
Markdown
[Galashov et al. "Non-Stationary Learning of Neural Networks with Automatic Soft Parameter Reset." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/galashov2024neurips-nonstationary/) doi:10.52202/079017-2647
BibTeX
@inproceedings{galashov2024neurips-nonstationary,
title = {{Non-Stationary Learning of Neural Networks with Automatic Soft Parameter Reset}},
author = {Galashov, Alexandre and Titsias, Michalis K. and György, András and Lyle, Clare and Pascanu, Razvan and Teh, Yee Whye and Sahani, Maneesh},
booktitle = {Neural Information Processing Systems},
year = {2024},
doi = {10.52202/079017-2647},
url = {https://mlanthology.org/neurips/2024/galashov2024neurips-nonstationary/}
}