Rigorous Guarantees for Tyler’s M-Estimator via Quantum Expansion

Abstract

Estimating the shape of an elliptical distribution is a fundamental problem in statistics. One estimator for the shape matrix, Tyler’s M-estimator, has been shown to have many appealing asymptotic properties. It performs well in numerical experiments and can be quickly computed in practice by a simple iterative procedure. Despite the many years the estimator has been studied in the statistics community, there was neither a non-asymptotic bound on the rate of the estimator nor a proof that the iterative procedure converges in polynomially many steps. Here we observe a surprising connection between Tyler’s M-estimator and operator scaling, which has been intensively studied in recent years in part because of its connections to the Brascamp-Lieb inequality in analysis. We use this connection, together with novel results on quantum expanders, to show that Tyler’s M-estimator has the optimal rate up to factors logarithmic in the dimension, and that in the generative model the iterative procedure has a linear convergence rate even without regularization.
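The "simple iterative procedure" the abstract refers to is the classical fixed-point iteration for Tyler's M-estimator: reweight each sample by the inverse of its Mahalanobis-type norm under the current shape estimate, average, and renormalize. A minimal NumPy sketch of that iteration (the function name, tolerance, and trace normalization are illustrative choices, not taken from the paper):

```python
import numpy as np

def tyler_m_estimator(X, max_iter=100, tol=1e-8):
    """Fixed-point iteration for Tyler's M-estimator of the shape matrix.

    X: (n, p) array of samples, assumed drawn from an elliptical
    distribution centered at the origin.
    Returns a p x p shape estimate normalized to have trace p.
    """
    n, p = X.shape
    Sigma = np.eye(p)
    for _ in range(max_iter):
        inv = np.linalg.inv(Sigma)
        # Weight each sample x_i by 1 / (x_i^T Sigma^{-1} x_i).
        w = 1.0 / np.einsum('ij,jk,ik->i', X, inv, X)
        Sigma_new = (p / n) * (X * w[:, None]).T @ X
        # Fix the scale, which Tyler's estimator leaves undetermined.
        Sigma_new *= p / np.trace(Sigma_new)
        if np.linalg.norm(Sigma_new - Sigma) < tol:
            return Sigma_new
        Sigma = Sigma_new
    return Sigma
```

The estimator only identifies the shape up to scale, hence the trace normalization inside the loop; the paper's contribution is a non-asymptotic error bound for this estimator and a proof that the iteration above converges linearly in the generative model.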

Cite

Text

Franks and Moitra. "Rigorous Guarantees for Tyler’s M-Estimator via Quantum Expansion." Conference on Learning Theory, 2020.

Markdown

[Franks and Moitra. "Rigorous Guarantees for Tyler’s M-Estimator via Quantum Expansion." Conference on Learning Theory, 2020.](https://mlanthology.org/colt/2020/franks2020colt-rigorous/)

BibTeX

@inproceedings{franks2020colt-rigorous,
  title     = {{Rigorous Guarantees for Tyler’s M-Estimator via Quantum Expansion}},
  author    = {Franks, William Cole and Moitra, Ankur},
  booktitle = {Conference on Learning Theory},
  year      = {2020},
  pages     = {1601--1632},
  volume    = {125},
  url       = {https://mlanthology.org/colt/2020/franks2020colt-rigorous/}
}