Training a Hopfield Variational Autoencoder with Equilibrium Propagation
Abstract
On dedicated analog hardware, equilibrium propagation is an energy-efficient alternative to backpropagation. Despite its theoretical guarantees, however, its use in the AI domain has so far been limited to discriminative settings, while generative AI, with its high computational demands, is on the rise. In this paper, we demonstrate how equilibrium propagation can train a variational autoencoder (VAE) for generative modeling. Leveraging the symmetric nature of Hopfield networks, we propose using a single model as both encoder and decoder, which could effectively halve the required chip size for VAE implementations and pave the way for more efficient analog hardware configurations.
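For readers unfamiliar with the training scheme named in the abstract, the sketch below illustrates the generic two-phase equilibrium propagation update on a small continuous Hopfield network: a free phase and a weakly clamped ("nudged") phase are contrasted to yield a purely local weight update. This is a minimal illustrative reconstruction of the standard algorithm of Scellier and Bengio (2017), not the paper's VAE-specific procedure; the network sizes, hard-sigmoid activation, and hyperparameters (beta, lr, steps, dt) are all assumptions for the sketch.

import numpy as np

# Illustrative sketch of equilibrium propagation (EP), not the paper's setup.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 2          # assumed toy layer sizes
n = n_in + n_hid + n_out
W = rng.normal(0.0, 0.1, (n, n))
W = (W + W.T) / 2                      # Hopfield couplings are symmetric
np.fill_diagonal(W, 0.0)               # no self-connections

def rho(s):
    return np.clip(s, 0.0, 1.0)        # hard-sigmoid activation (assumed)

def settle(x, beta=0.0, y=None, steps=100, dt=0.1):
    """Relax the state toward an energy minimum; input units stay clamped
    to x. With beta > 0, output units are weakly nudged toward target y."""
    s = np.zeros(n)
    s[:n_in] = x
    for _ in range(steps):
        grad = -s + W @ rho(s)         # -dE/ds for the Hopfield energy
        if beta > 0.0:
            grad[-n_out:] += beta * (y - s[-n_out:])
        s[n_in:] += dt * grad[n_in:]   # only non-input units evolve
    return s

def ep_update(x, y, beta=0.5, lr=0.01):
    """One EP step: contrast the free and nudged equilibria. The resulting
    weight update depends only on locally available correlations."""
    s_free = settle(x)                      # free phase
    s_nudge = settle(x, beta=beta, y=y)     # weakly clamped phase
    r_free, r_nudge = rho(s_free), rho(s_nudge)
    dW = (np.outer(r_nudge, r_nudge) - np.outer(r_free, r_free)) / beta
    np.fill_diagonal(dW, 0.0)
    return W + lr * dW

x, y = rng.random(n_in), rng.random(n_out)
W = ep_update(x, y)                    # one contrastive weight update

The update contrasts correlations at the two equilibria rather than propagating error derivatives, which is what makes the scheme attractive for analog hardware; the same weight symmetry that EP assumes is what the abstract exploits to let one Hopfield network act as both encoder and decoder.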
Cite

Text

Van Der Meersch et al. "Training a Hopfield Variational Autoencoder with Equilibrium Propagation." NeurIPS 2023 Workshops: AMHN, 2023.

Markdown

[Van Der Meersch et al. "Training a Hopfield Variational Autoencoder with Equilibrium Propagation." NeurIPS 2023 Workshops: AMHN, 2023.](https://mlanthology.org/neuripsw/2023/meersch2023neuripsw-training/)

BibTeX
@inproceedings{meersch2023neuripsw-training,
  title = {{Training a Hopfield Variational Autoencoder with Equilibrium Propagation}},
  author = {Van Der Meersch, Tom and Deleu, Johannes and Demeester, Thomas},
  booktitle = {NeurIPS 2023 Workshops: AMHN},
  year = {2023},
  url = {https://mlanthology.org/neuripsw/2023/meersch2023neuripsw-training/}
}