Circuit Transformer: A Transformer That Preserves Logical Equivalence
Abstract
Implementing Boolean functions with circuits consisting of logic gates is fundamental in digital computer design. However, the implemented circuit must be exactly equivalent to the target function, which hinders generative neural approaches on this task due to their occasionally wrong predictions. In this study, we introduce a generative neural model, the “Circuit Transformer”, which eliminates such wrong predictions and produces logic circuits strictly equivalent to given Boolean functions. The main idea is a carefully designed decoding mechanism that builds a circuit step by step by generating tokens, with beneficial “cutoff properties” that block a candidate token once it invalidates equivalence. In this way, the proposed model works similarly to typical LLMs while logical equivalence is strictly preserved. A Markov decision process formulation is also proposed for optimizing certain objectives of circuits. Experimentally, we trained an 88-million-parameter Circuit Transformer to generate equivalent yet more compact forms of input circuits, outperforming existing neural approaches on both synthetic and real-world benchmarks, without any violation of equivalence constraints. Code: https://github.com/snowkylin/circuit-transformer
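The “cutoff” decoding described above resembles constrained decoding with token masking. Below is a minimal Python sketch of that idea, not the paper’s implementation: the `model` callable, the `is_viable` equivalence checker, the vocabulary, and the `<eos>` token are all hypothetical placeholders supplied by the caller.

```python
import math

def constrained_decode(model, vocab, is_viable, max_steps=256):
    """Greedy decoding that never emits a token violating equivalence.

    model(tokens)      -- hypothetical: returns a list of per-token logits
    is_viable(tokens)  -- hypothetical: True iff the partial sequence can
                          still be completed into an equivalent circuit
    """
    tokens = []
    for _ in range(max_steps):
        logits = list(model(tokens))
        # Cutoff: block any candidate token whose extension can no longer
        # be completed into a circuit equivalent to the specification.
        for i, tok in enumerate(vocab):
            if not is_viable(tokens + [tok]):
                logits[i] = -math.inf
        best = max(range(len(vocab)), key=lambda i: logits[i])
        tokens.append(vocab[best])
        if vocab[best] == "<eos>":
            break
    return tokens
```

In practice one would expect the viability check to be computed incrementally over the partially built circuit rather than from scratch per candidate; the loop above is only meant to show where the masking enters the decoding step.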
Cite
BibTeX
@inproceedings{li2025iclr-circuit,
title = {{Circuit Transformer: A Transformer That Preserves Logical Equivalence}},
author = {Li, Xihan and Li, Xing and Chen, Lei and Zhang, Xing and Yuan, Mingxuan and Wang, Jun},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/li2025iclr-circuit/}
}