Conditions Under Which Conditional Independence and Scoring Methods Lead to Identical Selection of Bayesian Network Models
Abstract
It is often stated in papers tackling the task of selecting a Bayesian network structure from data that there are two distinct approaches: (i) apply conditional independence tests to decide on the presence or absence of edges; (ii) search the model space using a scoring metric. Here I argue that, for complete data and a given node ordering, this division is largely a myth, by showing that cross-entropy methods for checking conditional independence are mathematically identical to methods that discriminate between models by their overall goodness-of-fit logarithmic scores.
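The equivalence asserted in the abstract can be sketched with a standard identity (the notation here is mine, illustrating the general relationship rather than the paper's exact derivation). For complete data with $N$ cases, the maximized log-likelihood of a Bayesian network decomposes into empirical conditional entropies, so the score gain from adding an edge equals $N$ times an empirical conditional mutual information, which is precisely the cross-entropy statistic used in conditional independence testing:

```latex
\begin{align*}
% Maximized log-likelihood of a BN M over nodes X_1,\dots,X_n
% with parent sets \mathrm{Pa}_i, fitted to N complete cases:
\ell(M) &= -N \sum_{i=1}^{n} \hat{H}(X_i \mid \mathrm{Pa}_i) \\
% Score difference between M_1 (edge Y \to X present, parents Z \cup \{Y\})
% and M_0 (edge absent, parents Z):
\ell(M_1) - \ell(M_0)
  &= -N\bigl[\hat{H}(X \mid Z \cup \{Y\}) - \hat{H}(X \mid Z)\bigr] \\
  &= N\,\hat{I}(X; Y \mid Z),
\end{align*}
```

where $\hat{H}$ and $\hat{I}$ denote entropies and mutual information computed from the empirical distribution. Thus ranking models by log-likelihood score and thresholding the cross-entropy test statistic $\hat{I}(X;Y \mid Z)$ make the same edge decisions, given the ordering.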
Cite

Text

Cowell. "Conditions Under Which Conditional Independence and Scoring Methods Lead to Identical Selection of Bayesian Network Models." Conference on Uncertainty in Artificial Intelligence, 2001.

Markdown

[Cowell. "Conditions Under Which Conditional Independence and Scoring Methods Lead to Identical Selection of Bayesian Network Models." Conference on Uncertainty in Artificial Intelligence, 2001.](https://mlanthology.org/uai/2001/cowell2001uai-conditions/)

BibTeX
@inproceedings{cowell2001uai-conditions,
title = {{Conditions Under Which Conditional Independence and Scoring Methods Lead to Identical Selection of Bayesian Network Models}},
author = {Cowell, Robert G.},
booktitle = {Conference on Uncertainty in Artificial Intelligence},
year = {2001},
  pages = {91--97},
url = {https://mlanthology.org/uai/2001/cowell2001uai-conditions/}
}