Gradient Origin Networks
Abstract
This paper proposes a new type of generative model that is able to quickly learn a latent representation without an encoder. This is achieved using empirical Bayes to calculate the expectation of the posterior, which is implemented by initialising a latent vector with zeros, then using the gradient of the log-likelihood of the data with respect to this zero vector as new latent points. The approach has similar characteristics to autoencoders, but with a simpler architecture, and is demonstrated in a variational autoencoder equivalent that permits sampling. This also allows implicit representation networks to learn a space of implicit functions without requiring a hypernetwork, retaining their representation advantages across datasets. The experiments show that the proposed method converges faster, with significantly lower reconstruction error than autoencoders, while requiring half the parameters.
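To make the abstract's core idea concrete, below is a minimal sketch of one GON training step in PyTorch: the latent is initialised at the origin, replaced by the negative gradient of the reconstruction loss with respect to that zero vector, and the decoder is then trained through both passes. The decoder architecture, dimensions, MSE-as-negative-log-likelihood choice, and optimiser settings here are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, data_dim = 32, 784  # hypothetical sizes (e.g. flattened MNIST)
decoder = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ELU(),
    nn.Linear(256, data_dim), nn.Sigmoid(),
)
opt = torch.optim.Adam(decoder.parameters(), lr=2e-4)

def gon_step(x):
    # Origin: a zero latent vector for every datapoint in the batch.
    z0 = torch.zeros(x.size(0), latent_dim, requires_grad=True)
    # Inner pass: reconstruction loss at the origin (MSE stands in for
    # the negative log-likelihood under a Gaussian assumption).
    inner_loss = F.mse_loss(decoder(z0), x)
    # Empirical-Bayes latent: the negative gradient of the loss w.r.t. z0.
    # create_graph=True keeps this step differentiable for the outer pass.
    z = -torch.autograd.grad(inner_loss, z0, create_graph=True)[0]
    # Outer pass: reconstruct from the gradient-derived latent and update
    # the decoder through both passes.
    outer_loss = F.mse_loss(decoder(z), x)
    opt.zero_grad()
    outer_loss.backward()
    opt.step()
    return outer_loss.item()

x = torch.rand(16, data_dim)  # stand-in batch; real data would go here
print(gon_step(x))
```

Note that no encoder parameters exist anywhere in this loop, which is where the claimed halving of parameters relative to an autoencoder comes from: the single decoder network serves both to infer latents (via its gradient) and to reconstruct from them.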
Cite
Text
Bond-Taylor and Willcocks. "Gradient Origin Networks." International Conference on Learning Representations, 2021.
Markdown
[Bond-Taylor and Willcocks. "Gradient Origin Networks." International Conference on Learning Representations, 2021.](https://mlanthology.org/iclr/2021/bondtaylor2021iclr-gradient/)
BibTeX
@inproceedings{bondtaylor2021iclr-gradient,
  title = {{Gradient Origin Networks}},
  author = {Bond-Taylor, Sam and Willcocks, Chris G.},
  booktitle = {International Conference on Learning Representations},
  year = {2021},
  url = {https://mlanthology.org/iclr/2021/bondtaylor2021iclr-gradient/}
}