Learning Distributionally Robust Tractable Probabilistic Models in Continuous Domains
Abstract
Tractable probabilistic models (TPMs) have attracted substantial research interest in recent years, particularly because of their ability to answer various reasoning queries in polynomial time. In this study, we focus on distributionally robust learning of continuous TPMs and address the challenge of distribution shift at test time by solving an adversarial risk minimization problem. Specifically, we demonstrate that this problem can be solved efficiently whenever the model permits exact log-likelihood evaluation and efficient learning on weighted data. Our experimental results on several real-world datasets show that our approach achieves significantly higher log-likelihoods on adversarial test sets. Remarkably, the model learned via distributionally robust learning can at times achieve higher average log-likelihood even on the original, uncorrupted test set.
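The abstract notes that the adversarial risk minimization problem becomes tractable when the model supports exact log-likelihood evaluation and learning from weighted data. The sketch below is NOT the authors' algorithm; it is a minimal, generic illustration of that idea, assuming a KL-ball adversary (which yields exponential-tilting weights that upweight poorly-fit points) and a simple 1-D Gaussian as the stand-in "tractable model". The function names (`dro_fit`, `weighted_gaussian_fit`) and the radius parameter `eta` are illustrative inventions.

```python
import numpy as np

def log_likelihood(x, mu, var):
    # Exact log-density of a 1-D Gaussian -- the "exact log-likelihood
    # evaluation" capability the abstract requires of the model.
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def weighted_gaussian_fit(x, w):
    # Closed-form weighted maximum-likelihood fit -- the "efficient
    # learning on weighted data" capability.
    w = w / w.sum()
    mu = np.sum(w * x)
    var = np.sum(w * (x - mu) ** 2)
    return mu, var

def dro_fit(x, eta=0.5, n_iters=20):
    # Alternate between (1) the adversary's worst-case reweighting of the
    # training points (exponential tilting of the current log-likelihoods,
    # the closed-form worst case under a KL constraint) and (2) refitting
    # the model on the reweighted data.
    mu, var = x.mean(), x.var()
    for _ in range(n_iters):
        ll = log_likelihood(x, mu, var)
        w = np.exp(-eta * (ll - ll.max()))  # upweight low-likelihood points
        mu, var = weighted_gaussian_fit(x, w)
    return mu, var
```

Because the adversary shifts mass toward the tails, the robust fit typically has a larger variance than the plain maximum-likelihood fit; how the trade-off between robustness and clean-data fit is controlled (here, by the assumed `eta`) is a design choice of the actual method in the paper.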
Cite
Text
Dong et al. "Learning Distributionally Robust Tractable Probabilistic Models in Continuous Domains." Uncertainty in Artificial Intelligence, 2024.
Markdown
[Dong et al. "Learning Distributionally Robust Tractable Probabilistic Models in Continuous Domains." Uncertainty in Artificial Intelligence, 2024.](https://mlanthology.org/uai/2024/dong2024uai-learning/)
BibTeX
@inproceedings{dong2024uai-learning,
title = {{Learning Distributionally Robust Tractable Probabilistic Models in Continuous Domains}},
author = {Dong, Hailiang and Amato, James and Gogate, Vibhav and Ruozzi, Nicholas},
booktitle = {Uncertainty in Artificial Intelligence},
year = {2024},
  pages = {1176--1188},
volume = {244},
url = {https://mlanthology.org/uai/2024/dong2024uai-learning/}
}