SA-DQAS: Self-Attention Enhanced Differentiable Quantum Architecture Search

Abstract

In this paper, we introduce SA-DQAS, a novel framework that enhances gradient-based Differentiable Quantum Architecture Search (DQAS) with a self-attention mechanism, aiming to optimize circuit design for Quantum Machine Learning (QML) tasks. Analogous to a sequence of words in a sentence, a quantum circuit can be viewed as a sequence of placeholders to be filled with quantum gates. In DQAS, each placeholder is treated independently, whereas the self-attention mechanism in SA-DQAS captures the relations and dependencies among the operation candidates placed on the placeholders of a circuit. To evaluate and verify our method, we conduct experiments on job-shop scheduling problems (JSSP), Max-cut problems, quantum chemistry, and quantum fidelity. Incorporating self-attention improves the stability and performance of the resulting quantum circuits and refines their structural design, yielding higher noise resilience and fidelity. Our research demonstrates the first successful integration of self-attention with DQAS.
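The idea of applying self-attention over placeholder-level architecture parameters can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: it assumes the DQAS architecture parameters form a learnable matrix alpha of per-placeholder operation logits and shows how a standard self-attention layer (PyTorch's nn.MultiheadAttention) could let each placeholder's operation distribution depend on the others. All dimensions, layer choices, and names (SelfAttentionArchitecture, proj_in, proj_out) are illustrative assumptions.

import torch
import torch.nn as nn

class SelfAttentionArchitecture(nn.Module):
    """Hypothetical sketch: refine DQAS-style architecture logits with self-attention.

    alpha[p, o] is the learnable logit for placing operation candidate o on
    placeholder p. Self-attention over the placeholder sequence lets each
    placeholder's logits depend on the others, mimicking the inter-placeholder
    dependencies described in the abstract.
    """

    def __init__(self, num_placeholders: int, num_ops: int, embed_dim: int = 32):
        super().__init__()
        # Raw architecture parameters, one logit vector per placeholder.
        self.alpha = nn.Parameter(torch.zeros(num_placeholders, num_ops))
        self.proj_in = nn.Linear(num_ops, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads=4, batch_first=True)
        self.proj_out = nn.Linear(embed_dim, num_ops)

    def forward(self) -> torch.Tensor:
        # Treat the placeholders as a sequence of length num_placeholders.
        x = self.proj_in(self.alpha).unsqueeze(0)    # (1, P, embed_dim)
        attended, _ = self.attn(x, x, x)             # self-attention across placeholders
        logits = self.proj_out(attended.squeeze(0))  # (P, num_ops)
        # Differentiable distribution over operation candidates per placeholder.
        return torch.softmax(logits, dim=-1)

# Usage: the attended probabilities can drive a relaxed (differentiable)
# circuit construction, exactly as the plain alpha logits would in DQAS.
arch = SelfAttentionArchitecture(num_placeholders=6, num_ops=5)
probs = arch()   # shape (6, 5), differentiable w.r.t. alpha
print(probs.shape)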

Cite

Text

Sun et al. "SA-DQAS: Self-Attention Enhanced Differentiable Quantum Architecture Search." ICML 2024 Workshops: Differentiable_Almost_Everything, 2024.

Markdown

[Sun et al. "SA-DQAS: Self-Attention Enhanced Differentiable Quantum Architecture Search." ICML 2024 Workshops: Differentiable_Almost_Everything, 2024.](https://mlanthology.org/icmlw/2024/sun2024icmlw-sadqas/)

BibTeX

@inproceedings{sun2024icmlw-sadqas,
  title     = {{SA-DQAS: Self-Attention Enhanced Differentiable Quantum Architecture Search}},
  author    = {Sun, Yize and Liu, Jiarui and Wu, Zixin and Ding, Zifeng and Ma, Yunpu and Seidl, Thomas and Tresp, Volker},
  booktitle = {ICML 2024 Workshops: Differentiable_Almost_Everything},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/sun2024icmlw-sadqas/}
}