Meta-Learning Sparse Implicit Neural Representations
Abstract
Implicit neural representations are a promising new avenue for representing general signals by learning a continuous function that, parameterized as a neural network, maps the domain of a signal to its codomain; for example, the mapping from the spatial coordinates of an image to its pixel values. Because they can convey fine details of a high-dimensional signal regardless of the resolution of its domain, implicit neural representations offer many advantages over conventional discrete representations. However, the current approach is difficult to scale to a large number of signals or to a data set, since learning a neural representation, which is itself parameter-heavy, for each signal individually requires a large amount of memory and computation. To address this issue, we propose to leverage a meta-learning approach in combination with network compression under a sparsity constraint, so that it yields a well-initialized sparse parameterization that quickly adapts to represent a set of unseen signals in the subsequent training. We empirically demonstrate that meta-learned sparse neural representations achieve a much smaller loss than dense meta-learned models with the same number of parameters, when trained to fit each signal using the same number of optimization steps.
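The sketch below illustrates the general idea described in the abstract, not the authors' exact method: a coordinate MLP serves as the implicit representation, one-shot magnitude pruning supplies the sparsity constraint, and a first-order (Reptile-style) outer loop stands in for the paper's meta-learning procedure. All module names, hyperparameters, and the toy data are illustrative assumptions.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoordinateMLP(nn.Module):
    """Implicit representation: maps (x, y) coordinates to RGB values."""
    def __init__(self, hidden=64, depth=3):
        super().__init__()
        layers, d = [], 2
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers.append(nn.Linear(d, 3))
        self.net = nn.Sequential(*layers)

    def forward(self, coords):
        return self.net(coords)

def magnitude_masks(model, sparsity=0.5):
    """One-shot magnitude pruning: keep the largest-magnitude weights."""
    masks = []
    for p in model.parameters():
        if p.dim() > 1:  # prune weight matrices, keep biases dense
            k = int(p.numel() * (1.0 - sparsity))
            thresh = p.abs().flatten().topk(k).values.min()
            masks.append((p.abs() >= thresh).float())
        else:
            masks.append(torch.ones_like(p))
    return masks

def adapt(model, masks, coords, pixels, steps=5, lr=1e-2):
    """Inner loop: fit one signal with a few SGD steps on the sparse support."""
    net = copy.deepcopy(model)
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(net(coords), pixels)
        loss.backward()
        # keep pruned weights at zero by masking their gradients
        for p, m in zip(net.parameters(), masks):
            p.grad.mul_(m)
        opt.step()
    return net, loss.item()

def meta_train(model, masks, signals, meta_lr=0.1, epochs=5):
    """Outer loop (first-order, Reptile-style simplification): move the sparse
    initialization toward the parameters adapted to each signal."""
    for _ in range(epochs):
        for coords, pixels in signals:
            adapted, _ = adapt(model, masks, coords, pixels)
            with torch.no_grad():
                for p, q, m in zip(model.parameters(), adapted.parameters(), masks):
                    p.add_(meta_lr * (q - p) * m)
    return model

if __name__ == "__main__":
    # Toy data: two random 8x8 "images" given as (coordinates, pixels) pairs.
    ys, xs = torch.meshgrid(torch.linspace(0, 1, 8), torch.linspace(0, 1, 8), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    signals = [(coords, torch.rand(coords.shape[0], 3)) for _ in range(2)]

    model = CoordinateMLP()
    masks = magnitude_masks(model, sparsity=0.5)
    with torch.no_grad():  # zero out pruned weights so the initialization is sparse
        for p, m in zip(model.parameters(), masks):
            p.mul_(m)
    meta_train(model, masks, signals)
    _, fit_loss = adapt(model, masks, *signals[0])
    print(f"loss after few-step adaptation: {fit_loss:.4f}")
```

In this toy setup the mask is fixed before meta-training; the experiment the abstract describes compares such sparse meta-learned initializations against dense ones with an equal parameter count, fitting each signal with the same number of inner-loop steps.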
Cite
Text
Lee et al. "Meta-Learning Sparse Implicit Neural Representations." Neural Information Processing Systems, 2021.
Markdown
[Lee et al. "Meta-Learning Sparse Implicit Neural Representations." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/lee2021neurips-metalearning/)
BibTeX
@inproceedings{lee2021neurips-metalearning,
  title     = {{Meta-Learning Sparse Implicit Neural Representations}},
  author    = {Lee, Jaeho and Tack, Jihoon and Lee, Namhoon and Shin, Jinwoo},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/lee2021neurips-metalearning/}
}