Principal Eigenstate Classical Shadows
Abstract
Given many copies of an unknown quantum state $\rho$, we consider the task of learning a classical description of its principal eigenstate. Namely, assuming that $\rho$ has an eigenstate $|\phi\rangle$ with (unknown) eigenvalue $\lambda > 1/2$, the goal is to learn a (classical shadows style) classical description of $|\phi\rangle$ which can later be used to estimate expectation values $\langle \phi | O | \phi \rangle$ for any $O$ in some class of observables. We consider the sample-complexity setting in which generating a copy of $\rho$ is expensive, but joint measurements on many copies of the state are possible. We present a protocol for this task whose sample complexity scales with the principal eigenvalue $\lambda$, and show that it is optimal within a space of natural approaches, e.g., applying quantum state purification followed by a single-copy classical shadows scheme. Furthermore, when $\lambda$ is sufficiently close to $1$, the performance of our algorithm is optimal—matching the sample complexity for pure state classical shadows.
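To make the learning target concrete, here is a minimal numpy sketch of the setup the abstract describes: a density matrix $\rho$ whose principal eigenstate $|\phi\rangle$ has eigenvalue $\lambda > 1/2$, and the quantity $\langle \phi | O | \phi \rangle$ one wants to estimate. The specific state, observable, and eigendecomposition shortcut below are illustrative assumptions; the paper's protocol works from measurements on copies of $\rho$, not from direct matrix access as done here.

```python
import numpy as np

# Hypothetical 1-qubit instance: rho mixes the principal eigenstate |+>
# (eigenvalue lam > 1/2) with the orthogonal state |->.
lam = 0.8                                   # principal eigenvalue, assumed > 1/2
phi = np.array([1.0, 1.0]) / np.sqrt(2)     # |+>, the principal eigenstate
phi_perp = np.array([1.0, -1.0]) / np.sqrt(2)
rho = lam * np.outer(phi, phi) + (1 - lam) * np.outer(phi_perp, phi_perp)

# The learning target: expectation values <phi|O|phi> for observables O.
O = np.array([[1.0, 0.0], [0.0, -1.0]])     # Pauli Z as an example observable

# With direct access to the matrix rho (unlike the sample-access setting of
# the paper), the principal eigenstate is just the top eigenvector:
evals, evecs = np.linalg.eigh(rho)
phi_hat = evecs[:, np.argmax(evals)]

expectation = phi_hat @ O @ phi_hat         # <+|Z|+> = 0 analytically
```

The condition $\lambda > 1/2$ guarantees the principal eigenstate is unique (no other eigenvalue can match it), which is what makes "the" principal eigenstate well defined as a learning target.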
Cite
Text
Grier et al. "Principal Eigenstate Classical Shadows." Conference on Learning Theory, 2024.

BibTeX
@inproceedings{grier2024colt-principal,
title = {{Principal Eigenstate Classical Shadows}},
author = {Grier, Daniel and Pashayan, Hakop and Schaeffer, Luke},
booktitle = {Conference on Learning Theory},
year = {2024},
pages = {2122--2165},
volume = {247},
url = {https://mlanthology.org/colt/2024/grier2024colt-principal/}
}