Product Distribution Learning with Imperfect Advice
Abstract
Given i.i.d. samples from an unknown distribution $P$, the goal of distribution learning is to recover the parameters of a distribution that is close to $P$. When $P$ belongs to the class of product distributions on the Boolean hypercube $\{0,1\}^d$, it is known that $\Omega(d/\epsilon^2)$ samples are necessary to learn $P$ within total variation (TV) distance $\epsilon$. We revisit this problem when the learner is also given as advice the parameters of a product distribution $Q$. We show that there is an efficient algorithm that learns $P$ within TV distance $\epsilon$ with sample complexity $\tilde{O}(d^{1-\eta}/\epsilon^2)$ whenever $\|\mathbf{p} - \mathbf{q}\|_1 < \epsilon d^{0.5 - \Omega(\eta)}$. Here, $\mathbf{p}$ and $\mathbf{q}$ are the mean vectors of $P$ and $Q$ respectively, and no bound on $\|\mathbf{p} - \mathbf{q}\|_1$ is known to the algorithm a priori.
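To ground the sample-complexity discussion, the following is a minimal sketch of the classical no-advice baseline for this problem: estimate each coordinate's mean empirically and output the product distribution with those means. The abstract's $\Omega(d/\epsilon^2)$ lower bound says no advice-free method can do substantially better. This sketch is illustrative only and is not the advice-based algorithm of the paper; the dimension, ground-truth means, and sample budget below are hypothetical choices for the demo.

```python
import numpy as np

def empirical_product_estimate(samples: np.ndarray) -> np.ndarray:
    """Coordinate-wise mean estimator for a product distribution on {0,1}^d.

    Given n i.i.d. samples (one per row) from an unknown product
    distribution P, returns the empirical mean vector p_hat; the product
    distribution with mean p_hat is the classical baseline estimate of P.
    """
    return samples.mean(axis=0)

# Illustrative usage (hypothetical parameters): d coordinates and a sample
# budget of order d / eps^2, matching the baseline rate from the abstract.
rng = np.random.default_rng(0)
d, eps = 100, 0.1
p_true = rng.uniform(0.1, 0.9, size=d)        # hypothetical ground-truth means
n = int(d / eps**2)
samples = (rng.random((n, d)) < p_true).astype(np.int8)
p_hat = empirical_product_estimate(samples)
print(np.abs(p_hat - p_true).max())           # per-coordinate error shrinks with n
```

The paper's contribution is to beat this baseline's $d/\epsilon^2$ dependence, using $\tilde{O}(d^{1-\eta}/\epsilon^2)$ samples, when the advice $Q$ happens to be close to $P$ in the sense above, without knowing the advice quality in advance.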
Cite
Bhattacharyya et al. "Product Distribution Learning with Imperfect Advice." Advances in Neural Information Processing Systems, 2025.
BibTeX
@inproceedings{bhattacharyya2025neurips-product,
title = {{Product Distribution Learning with Imperfect Advice}},
author = {Bhattacharyya, Arnab and Choo, Davin and John, Philips George and Gouleakis, Themis},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/bhattacharyya2025neurips-product/}
}