A Theory of Initialisation's Impact on Specialisation
Abstract
Prior work has demonstrated a consistent tendency in neural networks engaged in continual learning tasks, wherein intermediate task similarity results in the highest levels of catastrophic interference. This phenomenon is attributed to the network's tendency to reuse learned features across tasks. However, this explanation relies heavily on the premise that neuron specialisation occurs, i.e. the emergence of localised representations. Our investigation challenges the validity of this assumption. Using theoretical frameworks for the analysis of neural networks, we show a strong dependence of specialisation on the initial conditions. More precisely, we show that weight imbalance and high weight entropy can favour specialised solutions. We then apply these insights in the context of continual learning, first showing the emergence of a monotonic relation between task similarity and forgetting in non-specialised networks. Finally, we show that specialisation induced by weight imbalance improves the performance of the commonly employed elastic weight consolidation regularisation technique.
Cite
Text
Jarvis et al. "A Theory of Initialisation's Impact on Specialisation." International Conference on Learning Representations, 2025.
Markdown
[Jarvis et al. "A Theory of Initialisation's Impact on Specialisation." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/jarvis2025iclr-theory/)
BibTeX
@inproceedings{jarvis2025iclr-theory,
  title     = {{A Theory of Initialisation's Impact on Specialisation}},
  author    = {Jarvis, Devon and Lee, Sebastian and Dominé, Clémentine Carla Juliette and Saxe, Andrew M and Mannelli, Stefano Sarao},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/jarvis2025iclr-theory/}
}