Folded Hamiltonian Monte Carlo for Bayesian Generative Adversarial Networks
Abstract
Probabilistic modelling of Generative Adversarial Networks (GANs) within the Bayesian framework has been shown in the literature to succeed in estimating complex data distributions. In this paper, we develop a Bayesian formulation for unsupervised and semi-supervised GAN learning. Specifically, we propose Folded Hamiltonian Monte Carlo (F-HMC) methods within this framework to learn the distributions over the parameters of the generators and discriminators. We show that F-HMC efficiently approximates multi-modal and high-dimensional data when combined with Bayesian GANs, and that its composition improves run time and test error when generating diverse samples. Experimental results on high-dimensional synthetic multi-modal data and natural image benchmarks, including CIFAR-10, SVHN and ImageNet, show that F-HMC outperforms state-of-the-art methods in terms of test error, run time per epoch, Inception Score and Fréchet Inception Distance.
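To make the sampling idea concrete, the sketch below shows a single standard (unfolded) HMC transition over a flattened parameter vector, as one might use to draw posterior samples of generator or discriminator weights. The abstract does not describe the folded composition itself, so the function names (hmc_step, log_post, log_post_grad) and all hyperparameters here are illustrative assumptions, not the paper's F-HMC.

import numpy as np

def hmc_step(theta, log_post, log_post_grad, step_size=0.01, n_leapfrog=20, rng=None):
    # One plain HMC transition for a flattened parameter vector theta.
    # log_post(theta) returns the log posterior up to a constant;
    # log_post_grad(theta) returns its gradient. This is standard HMC,
    # not the folded variant proposed in the paper.
    rng = np.random.default_rng() if rng is None else rng
    momentum = rng.standard_normal(theta.shape)   # auxiliary Gaussian momentum
    theta_new, p = theta.copy(), momentum.copy()

    # Leapfrog integration of the Hamiltonian dynamics.
    p += 0.5 * step_size * log_post_grad(theta_new)
    for _ in range(n_leapfrog - 1):
        theta_new += step_size * p
        p += step_size * log_post_grad(theta_new)
    theta_new += step_size * p
    p += 0.5 * step_size * log_post_grad(theta_new)

    # Metropolis accept/reject on the joint energy of (theta, momentum).
    current_h = -log_post(theta) + 0.5 * momentum @ momentum
    proposed_h = -log_post(theta_new) + 0.5 * p @ p
    if np.log(rng.uniform()) < current_h - proposed_h:
        return theta_new, True
    return theta, False

# Toy usage: sample a 2-D standard Gaussian stand-in for a weight posterior.
if __name__ == "__main__":
    log_post = lambda th: -0.5 * th @ th
    log_post_grad = lambda th: -th
    theta, samples = np.zeros(2), []
    for _ in range(1000):
        theta, _ = hmc_step(theta, log_post, log_post_grad)
        samples.append(theta)
    print(np.mean(samples, axis=0), np.std(samples, axis=0))

In a Bayesian GAN setting, log_post would be the (unnormalised) posterior over generator or discriminator weights given the adversarial likelihood and prior; the paper's contribution lies in how such HMC moves are folded and composed, which this sketch does not attempt to reproduce.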
Cite
Text
Pourshahrokhi et al. "Folded Hamiltonian Monte Carlo for Bayesian Generative Adversarial Networks." Proceedings of the 15th Asian Conference on Machine Learning, 2023.
BibTeX
@inproceedings{pourshahrokhi2023acml-folded,
title = {{Folded Hamiltonian Monte Carlo for Bayesian Generative Adversarial Networks}},
author = {Pourshahrokhi, Narges and Li, Yunpeng and Kouchaki, Samaneh and Barnaghi, Payam},
booktitle = {Proceedings of the 15th Asian Conference on Machine Learning},
year = {2023},
pages = {1103-1118},
volume = {222},
url = {https://mlanthology.org/acml/2023/pourshahrokhi2023acml-folded/}
}