Unit-Level Surprise in Neural Networks
Abstract
To adapt to changes in real-world data distributions, neural networks must update their parameters. We argue that unit-level surprise should be useful for: (i) determining which few parameters should update to adapt quickly; and (ii) learning a modularization such that few modules need be adapted to transfer. We empirically validate (i) in simple settings and reflect on the challenges and opportunities of realizing both (i) and (ii) in more general settings.
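The abstract does not spell out how unit-level surprise is computed, but one plausible operationalization, offered here purely as an illustrative sketch, is to model each unit's activations on the source distribution with an independent Gaussian and score post-shift activations by their negative log-likelihood under that model; units whose inputs have drifted most score as most "surprised" and become candidates for adaptation. All function names below are hypothetical and not from the paper.

```python
import numpy as np

def fit_unit_gaussians(acts):
    """Fit an independent Gaussian to each unit's activations.

    acts: (n_samples, n_units) activations collected on the source data.
    Returns per-unit means and standard deviations.
    """
    mu = acts.mean(axis=0)
    sigma = acts.std(axis=0) + 1e-8  # avoid division by zero
    return mu, sigma

def unit_surprise(acts_new, mu, sigma):
    """Mean per-unit surprise (Gaussian negative log-likelihood) of
    post-shift activations under the source-distribution fit."""
    nll = 0.5 * np.log(2 * np.pi * sigma**2) + (acts_new - mu)**2 / (2 * sigma**2)
    return nll.mean(axis=0)  # shape (n_units,)

# Hypothetical usage: adapt only the parameters feeding the k most
# surprised units after a distribution shift.
# mu, sigma = fit_unit_gaussians(source_acts)
# s = unit_surprise(shifted_acts, mu, sigma)
# units_to_adapt = np.argsort(s)[-k:]
```

Under this reading, goal (i) amounts to ranking units by surprise and updating only the highest-ranked few, while goal (ii) asks that the network be modularized so such surprise concentrates in a small number of modules.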
Cite

Eastwood et al. "Unit-Level Surprise in Neural Networks." NeurIPS 2021 Workshops: ICBINB, 2021. https://mlanthology.org/neuripsw/2021/eastwood2021neuripsw-unitlevel/

BibTeX
@inproceedings{eastwood2021neuripsw-unitlevel,
  title = {{Unit-Level Surprise in Neural Networks}},
  author = {Eastwood, Cian and Mason, Ian and Williams, Chris},
  booktitle = {NeurIPS 2021 Workshops: ICBINB},
  year = {2021},
  url = {https://mlanthology.org/neuripsw/2021/eastwood2021neuripsw-unitlevel/}
}