Stabilizing the Kumaraswamy Distribution

Abstract

Large-scale latent variable models require expressive continuous distributions that support efficient sampling and low-variance gradient estimation via the reparameterization trick. The Kumaraswamy (KS) distribution is both expressive and supports the reparameterization trick with a simple closed-form inverse CDF. Yet, its adoption remains limited. We identify and resolve numerical instabilities in the log-pdf, CDF, and inverse CDF, exposing issues in libraries like PyTorch and TensorFlow. We then introduce simple and scalable latent variable models to address exploration-exploitation trade-offs in contextual multi-armed bandits and to facilitate uncertainty quantification for link prediction with graph neural networks. We find these models to be most performant when paired with the stable KS. Our results support the stabilized KS distribution as a core component in scalable variational models for bounded latent variables.

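To make the abstract's claims concrete: the KS distribution on (0, 1) has pdf f(x) = a b x^(a-1) (1 - x^a)^(b-1) and inverse CDF F^{-1}(u) = (1 - (1 - u)^(1/b))^(1/a), so a uniform draw pushed through the quantile gives a one-line reparameterized sampler. The instability arises from terms like 1 - x^a, which cancel catastrophically in floating point when x^a is near 1. The PyTorch sketch below is illustrative only: the function names are made up here, and the log1p/expm1 rewrite is a standard stabilization pattern assumed to be in the spirit of the paper, not its exact implementation.

import torch

def ks_icdf_naive(u, a, b):
    # F^{-1}(u) = (1 - (1 - u)^(1/b))^(1/a); loses precision
    # when (1 - u)^(1/b) is close to 1 (i.e., small u or large b).
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

def ks_icdf_stable(u, a, b):
    # Same quantile computed in log space:
    #   log F^{-1}(u) = (1/a) * log(-expm1(log1p(-u) / b)),
    # using 1 - exp(t) = -expm1(t) and log(1 - u) = log1p(-u)
    # to avoid catastrophic cancellation near the boundaries.
    return torch.exp(torch.log(-torch.expm1(torch.log1p(-u) / b)) / a)

def ks_log_pdf_stable(x, a, b):
    # log f(x) = log a + log b + (a - 1) log x + (b - 1) log(1 - x^a),
    # with log(1 - x^a) = log(-expm1(a * log x)) for stability as x -> 1.
    log_x = torch.log(x)
    return (torch.log(a) + torch.log(b)
            + (a - 1.0) * log_x
            + (b - 1.0) * torch.log(-torch.expm1(a * log_x)))

# Reparameterized sampling: u ~ Uniform(0, 1), x = F^{-1}(u; a, b).
# Gradients flow through the quantile back to the parameters a and b.
a = torch.tensor(0.05, requires_grad=True)
b = torch.tensor(20.0, requires_grad=True)
u = torch.rand(1000)
x = ks_icdf_stable(u, a, b)

The naive and stable quantiles agree in well-conditioned regimes, but for extreme parameter values the naive form collapses samples to exactly 0 or 1, which is what breaks downstream log-pdf evaluations.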
Cite

Text

Wasserman and Mateos. "Stabilizing the Kumaraswamy Distribution." Transactions on Machine Learning Research, 2025.

Markdown

[Wasserman and Mateos. "Stabilizing the Kumaraswamy Distribution." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/wasserman2025tmlr-stabilizing/)

BibTeX

@article{wasserman2025tmlr-stabilizing,
  title     = {{Stabilizing the Kumaraswamy Distribution}},
  author    = {Wasserman, Max and Mateos, Gonzalo},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/wasserman2025tmlr-stabilizing/}
}