Adaptive Convergence Rates for Log-Concave Maximum Likelihood
Abstract
We study the problem of estimating a log-concave density in $\mathbb{R}^d$ using the maximum likelihood estimator, known as the log-concave MLE. We show that for every $d \geq 4$, the log-concave MLE attains an \emph{adaptive rate} when the negative logarithm of the underlying density is the maximum of $k$ affine functions: for such a density, the risk decays significantly faster than the minimax rate over the full class of log-concave densities. Specifically, we prove that for such densities the risk of the log-concave MLE is of order $c(k) \cdot n^{-\frac{4}{d}}$ in terms of the squared Hellinger distance. This result complements the work of Kim et al. (AoS 2018) and Feng et al. (AoS 2021), who addressed the cases $d = 1$ and $d \in \{2,3\}$, respectively. Our proof provides a unified and relatively simple approach for all $d \geq 1$, and is based on techniques from stochastic convex geometry and empirical process theory that may be of independent interest.
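For concreteness, here is one standard way to write the objects appearing in the abstract; the notation below is ours rather than taken from the paper, and the Hellinger normalization is one common convention. Given i.i.d. samples $X_1, \dots, X_n \sim p_0$ on $\mathbb{R}^d$, the log-concave MLE is

$$\hat{p}_n \in \operatorname*{arg\,max}_{p \ \text{log-concave}} \; \frac{1}{n} \sum_{i=1}^{n} \log p(X_i).$$

The adaptive regime concerns densities whose negative logarithm is the maximum of $k$ affine functions,

$$p_0 = e^{-\varphi_0}, \qquad \varphi_0(x) = \max_{1 \le j \le k} \big( a_j^\top x + b_j \big), \quad a_j \in \mathbb{R}^d, \; b_j \in \mathbb{R};$$

for example, in $d = 1$ the Laplace density $\tfrac{1}{2} e^{-|x|}$ is of this form with $k = 2$, since $-\log p_0(x) = \max(x, -x) + \log 2$. The risk is measured in the squared Hellinger distance

$$h^2(p, q) = \frac{1}{2} \int_{\mathbb{R}^d} \big( \sqrt{p(x)} - \sqrt{q(x)} \big)^2 \, dx.$$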
Cite
Text
Kur and Guntuboyina. "Adaptive Convergence Rates for Log-Concave Maximum Likelihood." Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, 2025.
BibTeX
@inproceedings{kur2025aistats-adaptive,
title = {{Adaptive Convergence Rates for Log-Concave Maximum Likelihood}},
author = {Kur, Gil and Guntuboyina, Aditya},
booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
year = {2025},
pages = {1450--1458},
volume = {258},
url = {https://mlanthology.org/aistats/2025/kur2025aistats-adaptive/}
}