Weighted Model Counting with Conditional Weights for Bayesian Networks

Abstract

Weighted model counting (WMC) has emerged as the unifying inference mechanism across many (probabilistic) domains. Encoding an inference problem as an instance of WMC typically necessitates adding extra literals and clauses. This is partly because the predominant definition of WMC assigns weights to models based on weights on literals, which severely restricts which probability distributions can be represented. We develop a measure-theoretic perspective on WMC and propose a way to encode conditional weights on literals analogously to conditional probabilities. This representation can be as succinct as standard WMC with weights on literals but can also expand as needed to represent probability distributions with less structure. To demonstrate the performance benefits of conditional weights over the addition of extra literals, we develop a new WMC encoding for Bayesian networks and adapt ADDMC, a state-of-the-art WMC algorithm, to the new format. Our experiments show that the new encoding significantly improves the algorithm's performance on most benchmark instances.
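
To ground the contrast the abstract draws, here is a minimal sketch of the standard literal-weight definition of WMC (the notation is assumed for illustration, not taken from the paper): for a propositional formula $\phi$ and a weight function $w$ on literals, each model $\omega$ is identified with the set of literals it satisfies, and

$$\mathrm{WMC}(\phi) = \sum_{\omega \models \phi} \prod_{\ell \in \omega} w(\ell).$$

Because every model's weight must factorize as a product of independent literal weights, a hypothetical conditional weight such as $w(y \mid x) = \Pr(Y = 1 \mid X = 1)$ for a two-node Bayesian network $X \to Y$ cannot be expressed directly; standard encodings instead introduce an extra parameter literal per conditional-probability-table entry, which is the overhead the paper's conditional weights are designed to avoid.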

Cite

Text

Dilkas and Belle. "Weighted Model Counting with Conditional Weights for Bayesian Networks." Uncertainty in Artificial Intelligence, 2021.

Markdown

[Dilkas and Belle. "Weighted Model Counting with Conditional Weights for Bayesian Networks." Uncertainty in Artificial Intelligence, 2021.](https://mlanthology.org/uai/2021/dilkas2021uai-weighted/)

BibTeX

@inproceedings{dilkas2021uai-weighted,
  title     = {{Weighted Model Counting with Conditional Weights for Bayesian Networks}},
  author    = {Dilkas, Paulius and Belle, Vaishak},
  booktitle = {Uncertainty in Artificial Intelligence},
  year      = {2021},
  pages     = {386--396},
  volume    = {161},
  url       = {https://mlanthology.org/uai/2021/dilkas2021uai-weighted/}
}