Backdoor DNFs

Abstract

We introduce backdoor DNFs as a tool to measure the theoretical hardness of CNF formulas. Like backdoor sets and backdoor trees, backdoor DNFs are defined relative to a tractable class of CNF formulas. Each conjunctive term of a backdoor DNF defines a partial assignment that moves the input CNF formula into the base class. Backdoor DNFs are more expressive and potentially smaller than their predecessors, backdoor sets and backdoor trees. We establish the fixed-parameter tractability of the backdoor DNF detection problem. Our results hold for the fundamental base classes Horn and 2CNF, and their combination. We complement our theoretical findings with an empirical study. Our experiments show that backdoor DNFs provide a significant improvement over their predecessors.
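The core idea from the abstract can be illustrated with a minimal sketch (not the authors' algorithm): a CNF formula is a list of clauses, each a set of non-zero integer literals in the DIMACS convention (`-v` is the negation of variable `v`), a DNF term is a set of literals read as a partial assignment, and a term qualifies if applying it moves the formula into the base class (here, Horn). The data representation and helper names are illustrative assumptions.

```python
# Sketch of the backdoor DNF idea; representation and names are illustrative,
# not taken from the paper's implementation.

def apply_assignment(cnf, assignment):
    """Apply a partial assignment (a set of literals) to a CNF formula.

    Clauses containing a satisfied literal are removed; falsified
    literals are deleted from the remaining clauses.
    """
    reduced = []
    for clause in cnf:
        if clause & assignment:  # some literal of the clause is satisfied
            continue
        reduced.append(clause - {-lit for lit in assignment})
    return reduced

def is_horn(cnf):
    """A CNF formula is Horn if every clause has at most one positive literal."""
    return all(sum(1 for lit in clause if lit > 0) <= 1 for clause in cnf)

def terms_reach_base_class(cnf, dnf_terms):
    """Check that every conjunctive term moves the formula into Horn.

    (A full backdoor DNF additionally requires that the DNF covers all
    total assignments; that tautology check is omitted in this sketch.)
    """
    return all(is_horn(apply_assignment(cnf, term)) for term in dnf_terms)

# Example: (x1 v x2) & (~x1 v ~x2) is not Horn (two positive literals in
# the first clause), but setting x1 = True (term {1}) or x1 = False
# (term {-1}) reduces it to a Horn formula.
cnf = [{1, 2}, {-1, -2}]
print(is_horn(cnf))                                 # False
print(terms_reach_base_class(cnf, [{1}, {-1}]))     # True
```

The same skeleton adapts to the paper's other base classes by swapping the membership test, e.g. a `is_2cnf` check that every clause has at most two literals.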

Cite

Text

Ordyniak et al. "Backdoor DNFs." International Joint Conference on Artificial Intelligence, 2021. doi:10.24963/IJCAI.2021/194

Markdown

[Ordyniak et al. "Backdoor DNFs." International Joint Conference on Artificial Intelligence, 2021.](https://mlanthology.org/ijcai/2021/ordyniak2021ijcai-backdoor/) doi:10.24963/IJCAI.2021/194

BibTeX

@inproceedings{ordyniak2021ijcai-backdoor,
  title     = {{Backdoor DNFs}},
  author    = {Ordyniak, Sebastian and Schidler, André and Szeider, Stefan},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {1403--1409},
  doi       = {10.24963/IJCAI.2021/194},
  url       = {https://mlanthology.org/ijcai/2021/ordyniak2021ijcai-backdoor/}
}