Active Structure Learning of Bayesian Networks in an Observational Setting
Abstract
We study active structure learning of Bayesian networks in an observational setting, in which there are external limitations on the number of variable values that can be observed from the same sample. Random samples are drawn from the joint distribution of the network variables, and the algorithm iteratively selects which variables to observe in the next sample. We propose a new active learning algorithm for this setting that, with high probability, finds a structure whose score is $\epsilon$-close to the optimal score. We show that for a class of distributions that we term stable, a sample complexity reduction of up to a factor of $\widetilde{\Omega}(d^3)$ can be obtained, where $d$ is the number of network variables. We further show that in the worst case, the sample complexity of the active algorithm is guaranteed to be almost the same as that of a naive baseline algorithm. To supplement the theoretical results, we report experiments that compare the performance of the new active algorithm to the naive baseline and demonstrate the sample complexity improvements. Code for the algorithm and for the experiments is provided at https://github.com/noabdavid/activeBNSL.
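The observational constraint described in the abstract, where each fresh sample reveals only the values of a chosen subset of variables, can be pictured as a simple interaction loop. Below is a minimal, hypothetical Python sketch of that loop, not the authors' algorithm (see the linked repository for their implementation); the names `sample_joint`, `choose_subset`, `update_stats`, the per-sample observation budget `k`, and the toy counting policy are all illustrative assumptions.

```python
import numpy as np

def observational_active_loop(sample_joint, d, k, n_rounds, choose_subset, update_stats, stats):
    """Hypothetical sketch of the observational protocol: each round the learner
    picks at most k of the d variables to observe in a fresh sample drawn from
    the joint distribution; the remaining values stay hidden."""
    for _ in range(n_rounds):
        subset = choose_subset(stats, d, k)              # variables to observe this round
        full_sample = sample_joint()                     # fresh draw from the joint distribution
        observed = {i: full_sample[i] for i in subset}   # only the chosen values are revealed
        stats = update_stats(stats, observed)            # learner updates its running statistics
    return stats

# Toy usage: 5 binary variables, observe 2 per sample, count how often each variable was seen.
rng = np.random.default_rng(0)
d, k = 5, 2
sample_joint = lambda: rng.integers(0, 2, size=d)
choose_subset = lambda stats, d, k: rng.choice(d, size=k, replace=False)  # placeholder policy

def update_stats(stats, observed):
    for i, _ in observed.items():
        stats[i] = stats.get(i, 0) + 1
    return stats

counts = observational_active_loop(sample_joint, d, k, 200, choose_subset, update_stats, {})
print(counts)
```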
Cite
Text
Ben-David and Sabato. "Active Structure Learning of Bayesian Networks in an Observational Setting." Journal of Machine Learning Research, 2022.
Markdown
[Ben-David and Sabato. "Active Structure Learning of Bayesian Networks in an Observational Setting." Journal of Machine Learning Research, 2022.](https://mlanthology.org/jmlr/2022/bendavid2022jmlr-active/)
BibTeX
@article{bendavid2022jmlr-active,
title = {{Active Structure Learning of Bayesian Networks in an Observational Setting}},
author = {Ben-David, Noa and Sabato, Sivan},
journal = {Journal of Machine Learning Research},
year = {2022},
pages = {1--38},
volume = {23},
url = {https://mlanthology.org/jmlr/2022/bendavid2022jmlr-active/}
}