Neural Conditional Probability for Uncertainty Quantification
Abstract
We introduce Neural Conditional Probability (NCP), an operator-theoretic approach to learning conditional distributions with a focus on statistical inference tasks. NCP can be used to build conditional confidence regions and extract key statistics such as conditional quantiles, means, and covariances. It offers streamlined learning via a single unconditional training phase, allowing efficient inference without retraining even when the conditioning changes. By leveraging the approximation capabilities of neural networks, NCP efficiently handles a wide variety of complex probability distributions. We provide theoretical guarantees that ensure both optimization consistency and statistical accuracy. In experiments, we show that NCP with a 2-hidden-layer network matches or outperforms leading methods. This demonstrates that a minimalistic architecture with a theoretically grounded loss can achieve competitive results, even against more complex architectures.
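To make the workflow in the abstract concrete, below is a minimal sketch, assuming PyTorch, of the general pattern it describes: two 2-hidden-layer MLP embeddings of X and Y trained once on joint samples, after which conditional statistics are read off without retraining. The loss and readout here are generic operator-style surrogates, not the paper's NCP loss; all names (Encoder, surrogate_loss, the toy data) are illustrative assumptions.

# Minimal sketch (PyTorch). The loss below is a generic whitened
# cross-correlation surrogate, NOT the theoretically grounded NCP loss
# from the paper; everything here is illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """2-hidden-layer MLP mapping inputs to a d-dimensional embedding."""
    def __init__(self, in_dim, hidden=64, d=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, d),
        )
    def forward(self, z):
        return self.net(z)

def surrogate_loss(u, v):
    """Encourage cross-correlation between u(x) and v(y) while keeping
    each embedding near-whitened (assumed stand-in for the NCP loss)."""
    n, d = u.shape
    u = u - u.mean(0)
    v = v - v.mean(0)
    cross = (u * v).sum(1).mean()          # empirical E[u(X)^T v(Y)]
    eye = torch.eye(d)
    reg = ((u.T @ u / n - eye) ** 2).sum() + ((v.T @ v / n - eye) ** 2).sum()
    return -cross + reg

# Single unconditional training phase on joint samples (x, y).
x = torch.randn(512, 5)
y = x[:, :1] + 0.1 * torch.randn(512, 1)   # toy data
enc_x, enc_y = Encoder(5), Encoder(1)
opt = torch.optim.Adam(
    list(enc_x.parameters()) + list(enc_y.parameters()), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = surrogate_loss(enc_x(x), enc_y(y))
    loss.backward()
    opt.step()

# After training, a conditional mean at a new point x0 can be read off
# via a linear readout in the learned feature space, with no retraining.
with torch.no_grad():
    u = enc_x(x)
    w = (u - u.mean(0)).T @ (y - y.mean(0)) / len(y)
    x0 = torch.zeros(1, 5)
    cond_mean = y.mean(0) + (enc_x(x0) - u.mean(0)) @ w

The readout exploits the near-whitening enforced by the regularizer; other statistics (quantiles, covariances) would replace y in the readout with the corresponding test function, in the same spirit as the abstract's claim that conditioning can change without retraining.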
Cite
Text
Kostic et al. "Neural Conditional Probability for Uncertainty Quantification." Neural Information Processing Systems, 2024. doi:10.52202/079017-1950
Markdown
[Kostic et al. "Neural Conditional Probability for Uncertainty Quantification." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/kostic2024neurips-neural/) doi:10.52202/079017-1950
BibTeX
@inproceedings{kostic2024neurips-neural,
title = {{Neural Conditional Probability for Uncertainty Quantification}},
author = {Kostic, Vladimir R. and Lounici, Karim and Pacreau, Grégoire and Turri, Giacomo and Novelli, Pietro and Pontil, Massimiliano},
booktitle = {Neural Information Processing Systems},
year = {2024},
doi = {10.52202/079017-1950},
url = {https://mlanthology.org/neurips/2024/kostic2024neurips-neural/}
}