Random Function Priors for Correlation Modeling
Abstract
The likelihood model of high-dimensional data $X_n$ can often be expressed as $p(X_n|Z_n,\theta)$, where $\theta\mathrel{\mathop:}=(\theta_k)_{k\in[K]}$ is a collection of hidden features shared across objects (indexed by $n$), and $Z_n$ is a non-negative factor loading vector with $K$ entries, where $Z_{nk}$ indicates the strength with which $\theta_k$ is used to express $X_n$. In this paper, we introduce random function priors on $Z_n$ for modeling correlations among its $K$ dimensions $Z_{n1}$ through $Z_{nK}$, which we call the population random measure embedding (PRME). Our model can be viewed as a generalized paintbox model (Broderick et al., 2013) built from random functions, and can be learned efficiently with neural networks via amortized variational inference. We derive our Bayesian nonparametric method by applying a representation theorem for separately exchangeable discrete random measures.
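To make the modeling setup in the abstract concrete, here is a minimal generative sketch of a non-negative factor loading model whose loadings $Z_n$ are correlated because they come from evaluating a single shared random function. This is only an illustration under assumed choices (a Gaussian-process-style random function with an RBF kernel, a softplus link, and a Poisson likelihood); the names `e`, `u`, and `rbf` are hypothetical and this is not the paper's exact PRME construction or its amortized inference procedure.

    import numpy as np

    rng = np.random.default_rng(0)
    N, K, D = 100, 10, 20      # objects, latent features, observed dimensions

    # Shared hidden features theta_k (one D-dimensional feature per k).
    theta = rng.gamma(shape=1.0, scale=1.0, size=(K, D))

    # Hypothetical random-function prior: correlations among the K entries of
    # Z_n arise from evaluating one shared smooth function (a GP draw) at
    # nearby feature embeddings e_1, ..., e_K, shifted by an object input u_n.
    e = rng.normal(size=(K, 1))   # feature embeddings (1-D for simplicity)
    u = rng.normal(size=(N, 1))   # per-object latent inputs

    def rbf(a, b, length=1.0):
        d2 = (a[:, None, :] - b[None, :, :]) ** 2
        return np.exp(-d2.sum(-1) / (2 * length ** 2))

    Z = np.empty((N, K))
    for n in range(N):
        x = e + u[n]                        # object-specific inputs for the K features
        Kxx = rbf(x, x) + 1e-6 * np.eye(K)  # GP covariance with jitter
        f = rng.multivariate_normal(np.zeros(K), Kxx)
        Z[n] = np.log1p(np.exp(f))          # softplus -> non-negative loadings

    # Likelihood p(X_n | Z_n, theta): e.g. Poisson counts with rate Z_n @ theta.
    X = rng.poisson(Z @ theta)
    print(X.shape, Z.shape)                 # (100, 20) (100, 10)

Because every object's loadings are read off the same kind of smooth function, entries $Z_{nk}$ and $Z_{nk'}$ with nearby embeddings are correlated, which is the effect the random function prior is meant to capture.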
Cite
Text
Zhang and Paisley. "Random Function Priors for Correlation Modeling." International Conference on Machine Learning, 2019.
Markdown
[Zhang and Paisley. "Random Function Priors for Correlation Modeling." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/zhang2019icml-random/)
BibTeX
@inproceedings{zhang2019icml-random,
title = {{Random Function Priors for Correlation Modeling}},
author = {Zhang, Aonan and Paisley, John},
booktitle = {International Conference on Machine Learning},
year = {2019},
pages = {7424--7433},
volume = {97},
url = {https://mlanthology.org/icml/2019/zhang2019icml-random/}
}