CircuitVAE: Efficient and Scalable Latent Circuit Optimization

Abstract

Automatically designing fast and space-efficient digital circuits is challenging because circuits are discrete, must exactly implement the desired logic, and are costly to simulate. We address these challenges with CircuitVAE, a search algorithm that embeds computation graphs in a continuous space and optimizes a learned surrogate of physical simulation by gradient descent. By carefully controlling overfitting of the simulation surrogate and ensuring diverse exploration, our algorithm is highly sample-efficient, yet gracefully scales to large problem instances and high sample budgets. We test CircuitVAE by designing binary adders across a wide range of sizes, I/O timing constraints, and sample budgets. Our method excels at designing large circuits, where other algorithms struggle: compared to reinforcement learning and genetic algorithms, CircuitVAE typically finds 64-bit adders that are smaller and faster while using less than half the sample budget. We also find CircuitVAE can design state-of-the-art adders in a real-world chip, demonstrating that our method can outperform commercial tools in a realistic setting.
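
To make the core loop the abstract describes concrete, here is a minimal PyTorch sketch of gradient descent against a learned cost surrogate in a continuous latent space. It is an illustration under stated assumptions, not the authors' implementation: the names (CostPredictor, optimize_latent), the network architecture, the latent size, and the hyperparameters are all hypothetical, and the VAE encoder/decoder that maps circuits to and from the latent space is elided.

import torch
import torch.nn as nn

LATENT_DIM = 64  # assumed latent size, chosen only for illustration

class CostPredictor(nn.Module):
    # Hypothetical surrogate: maps a latent circuit embedding to a scalar
    # predicted cost, standing in for expensive physical simulation.
    def __init__(self, latent_dim: int = LATENT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

def optimize_latent(cost_model: nn.Module, z_init: torch.Tensor,
                    steps: int = 200, lr: float = 1e-2) -> torch.Tensor:
    # Gradient descent on the surrogate cost with respect to the latent
    # vector itself; the surrogate's weights are left untouched.
    z = z_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        cost_model(z).sum().backward()
        opt.step()
    return z.detach()

# Usage sketch: in the full method the starting point would come from a VAE
# encoder, and the optimized latent would be decoded back to a circuit and
# verified by real simulation. A random vector stands in for the encoding.
cost_model = CostPredictor()
z0 = torch.randn(1, LATENT_DIM)
z_opt = optimize_latent(cost_model, z0)

Because the search happens in a continuous space, candidate circuits can be improved with ordinary gradient steps rather than discrete mutations, which is what lets the method stay sample-efficient while scaling to large instances.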

Cite

Text

Song et al. "CircuitVAE: Efficient and Scalable Latent Circuit Optimization." NeurIPS 2023 Workshops: ReALML, 2023.

Markdown

[Song et al. "CircuitVAE: Efficient and Scalable Latent Circuit Optimization." NeurIPS 2023 Workshops: ReALML, 2023.](https://mlanthology.org/neuripsw/2023/song2023neuripsw-circuitvae/)

BibTeX

@inproceedings{song2023neuripsw-circuitvae,
  title     = {{CircuitVAE: Efficient and Scalable Latent Circuit Optimization}},
  author    = {Song, Jialin and Swope, Aidan and Kirby, Robert and Roy, Rajarshi and Godil, Saad and Raiman, Jonathan and Catanzaro, Bryan},
  booktitle = {NeurIPS 2023 Workshops: ReALML},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/song2023neuripsw-circuitvae/}
}