Parametrized Quantum Policies for Reinforcement Learning

Abstract

With the advent of real-world quantum computing, the idea that parametrized quantum computations can be used as hypothesis families in a quantum-classical machine learning system is gaining increasing traction. Such hybrid systems have already shown the potential to tackle real-world tasks in supervised and generative learning, and recent works have established their provable advantages in special artificial tasks. Yet, in the case of reinforcement learning, which is arguably the most challenging and where learning boosts would be extremely valuable, no proposal has been successful in solving even standard benchmarking tasks, nor in showing a theoretical learning advantage over classical algorithms. In this work, we achieve both. We propose a hybrid quantum-classical reinforcement learning model using very few qubits, which we show can be effectively trained to solve several standard benchmarking environments. Moreover, we demonstrate, and formally prove, the ability of parametrized quantum circuits to solve certain learning tasks that are intractable for classical models, including current state-of-the-art deep neural networks, under the widely believed classical hardness of the discrete logarithm problem.
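
The central object in the abstract is a parametrized quantum circuit (PQC) whose measurement statistics define a policy over actions, trained by a classical optimization loop. Below is a minimal, self-contained NumPy sketch of that idea, assuming a toy 2-qubit circuit with angle encoding of the observation, one trainable rotation layer, a CZ entangling gate, and a softmax over Pauli-Z expectations; the circuit layout, observables, action count, and the `beta` temperature are illustrative choices, not the authors' exact architecture or code.

```python
# Illustrative sketch (not the authors' implementation): a tiny parametrized
# quantum circuit simulated with NumPy, used as a softmax policy over actions.
import numpy as np

N_QUBITS = 2   # "very few qubits", as in the abstract
N_ACTIONS = 2  # e.g. a two-action benchmark environment (illustrative)

def ry(theta):
    """Single-qubit Y-rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def kron_all(mats):
    """Tensor product of a list of single-qubit gates."""
    out = np.array([[1.0 + 0j]])
    for m in mats:
        out = np.kron(out, m)
    return out

# Controlled-Z entangling gate on 2 qubits.
CZ = np.diag([1, 1, 1, -1]).astype(complex)

def circuit_state(features, params):
    """Angle-encode the observation, then apply one variational layer."""
    psi = np.zeros(2 ** N_QUBITS, dtype=complex)
    psi[0] = 1.0                                      # start in |00>
    psi = kron_all([ry(f) for f in features]) @ psi   # data-encoding rotations
    psi = kron_all([ry(p) for p in params]) @ psi     # trainable rotations
    psi = CZ @ psi                                    # entangling gate
    return psi

def z_expectations(psi):
    """<Z> on each qubit, computed from the measurement probabilities."""
    probs = np.abs(psi) ** 2
    exps = []
    for q in range(N_QUBITS):
        signs = np.array([1 if (i >> (N_QUBITS - 1 - q)) & 1 == 0 else -1
                          for i in range(2 ** N_QUBITS)])
        exps.append(np.dot(signs, probs))
    return np.array(exps)

def policy(features, params, beta=1.0):
    """Softmax over observable expectations -> action probabilities."""
    exps = z_expectations(circuit_state(features, params))
    logits = beta * exps[:N_ACTIONS]
    logits -= logits.max()                  # numerical stability
    p = np.exp(logits)
    return p / p.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    params = rng.uniform(-np.pi, np.pi, size=N_QUBITS)
    obs = np.array([0.3, -1.2])             # toy 2-dimensional observation
    print(policy(obs, params))              # action probabilities, sum to 1
```

In a full agent, `policy` would be sampled at each environment step, the circuit would be evaluated on a quantum device or simulator, and `params` would be updated with a standard classical policy-gradient method; those training details are beyond this sketch.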

Cite

Text

Jerbi et al. "Parametrized Quantum Policies for Reinforcement Learning." Neural Information Processing Systems, 2021.

Markdown

[Jerbi et al. "Parametrized Quantum Policies for Reinforcement Learning." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/jerbi2021neurips-parametrized/)

BibTeX

@inproceedings{jerbi2021neurips-parametrized,
  title     = {{Parametrized Quantum Policies for Reinforcement Learning}},
  author    = {Jerbi, Sofiene and Gyurik, Casper and Marshall, Simon and Briegel, Hans and Dunjko, Vedran},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/jerbi2021neurips-parametrized/}
}