Towards an Inductive Bias for Quantum Statistics in GANs

Abstract

Machine learning models that leverage a latent space with a structure similar to the underlying data distribution have been shown to be highly successful. However, when the data is produced by a quantum process, classical computers are expected to struggle to generate a matching latent space. Here, we show that using a quantum processor to produce the latent space used by a generator in a generative adversarial network (GAN) leads to improved performance on a small-scale quantum dataset. We also demonstrate that this approach is scalable to large-scale data. These results constitute a promising first step towards building real-world generative models with an inductive bias for data with quantum statistics.
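The core idea of the abstract can be sketched as follows: instead of drawing a GAN's latent vectors from a classical prior such as a Gaussian, one samples them from measurements of a quantum circuit. The snippet below is a minimal illustration only, not the authors' implementation; it hand-simulates measurements of a GHZ state (a simple entangled state whose samples a classical RNG can still reproduce at this scale) and feeds them to a toy untrained generator. All function names and shapes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantum_latent_samples(n_samples, n_qubits=4):
    """Toy stand-in for quantum latent sampling (illustrative assumption).

    Hand-simulates measuring a GHZ state (Hadamard on qubit 0, then
    CNOTs to all other qubits): each shot yields the all-zeros or
    all-ones bitstring with equal probability. In the paper's setting,
    these samples would instead come from a quantum processor.
    """
    bits = rng.integers(0, 2, size=n_samples)            # one outcome per shot
    return np.repeat(bits[:, None], n_qubits, axis=1).astype(np.float32)

def generator(z, W):
    """Toy GAN generator: a single linear layer with tanh activation."""
    return np.tanh(z @ W)

z = quantum_latent_samples(8)            # quantum-derived latent codes, shape (8, 4)
W = rng.normal(size=(4, 16))             # untrained generator weights (illustrative)
x = generator(z, W)                      # generated samples, shape (8, 16)
```

The only change relative to a standard GAN is the source of `z`; the generator, discriminator, and training loop are untouched, which is what makes the quantum latent space an inductive bias rather than a new objective.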

Cite

Text

Wallner and Clements. "Towards an Inductive Bias for Quantum Statistics in GANs." ICLR 2023 Workshops: Physics4ML, 2023.

Markdown

[Wallner and Clements. "Towards an Inductive Bias for Quantum Statistics in GANs." ICLR 2023 Workshops: Physics4ML, 2023.](https://mlanthology.org/iclrw/2023/wallner2023iclrw-inductive/)

BibTeX

@inproceedings{wallner2023iclrw-inductive,
  title     = {{Towards an Inductive Bias for Quantum Statistics in GANs}},
  author    = {Wallner, Hugo and Clements, William R},
  booktitle = {ICLR 2023 Workshops: Physics4ML},
  year      = {2023},
  url       = {https://mlanthology.org/iclrw/2023/wallner2023iclrw-inductive/}
}