On the Trainability and Classical Simulability of Learning Matrix Product States Variationally
Abstract
We prove that training the matrix product state (MPS) ansatz with global observables causes all partial derivatives of the objective to vanish, a phenomenon known as barren plateaus, while training with local observables avoids this issue. The MPS ansatz is widely used in quantum machine learning to learn approximations of weakly entangled states. Additionally, we demonstrate empirically that in many cases the objective function is an inner product of almost sparse operators, highlighting the potential for classically simulating such learning problems with few quantum resources. All our results are experimentally validated across various scenarios.
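For context, below is a minimal sketch of the global-versus-local distinction in the form standard in the barren-plateau literature; the symbols U(θ), O_G, and O_L are illustrative assumptions, and the paper's exact definitions may differ.

% Hedged sketch: learning a target n-qubit state |psi> with a
% parameterized circuit U(theta). The global observable compares
% against the full projector; the local one averages single-qubit
% projectors (I_{\bar{j}} is the identity on all qubits except j).
\begin{align*}
  C_{\mathrm{G}}(\theta) &= \mathrm{Tr}\!\left[ O_{\mathrm{G}}\, U(\theta)^\dagger |\psi\rangle\!\langle\psi|\, U(\theta) \right],
  & O_{\mathrm{G}} &= I - |0\rangle\!\langle 0|^{\otimes n}, \\
  C_{\mathrm{L}}(\theta) &= \mathrm{Tr}\!\left[ O_{\mathrm{L}}\, U(\theta)^\dagger |\psi\rangle\!\langle\psi|\, U(\theta) \right],
  & O_{\mathrm{L}} &= I - \frac{1}{n} \sum_{j=1}^{n} |0\rangle\!\langle 0|_{j} \otimes I_{\bar{j}}.
\end{align*}
% Both costs vanish exactly when U(theta)^dagger |psi> = |0...0>, but
% the global cost concentrates exponentially in n (a barren plateau),
% whereas the local cost can retain trainable gradients.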
Cite
Text
Basheer et al. "On the Trainability and Classical Simulability of Learning Matrix Product States Variationally." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I15.33701
Markdown
[Basheer et al. "On the Trainability and Classical Simulability of Learning Matrix Product States Variationally." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/basheer2025aaai-trainability/) doi:10.1609/AAAI.V39I15.33701
BibTeX
@inproceedings{basheer2025aaai-trainability,
title = {{On the Trainability and Classical Simulability of Learning Matrix Product States Variationally}},
author = {Basheer, Afrad and Feng, Yuan and Ferrie, Christopher and Li, Sanjiang and Pashayan, Hakop},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {15498--15506},
doi = {10.1609/AAAI.V39I15.33701},
url = {https://mlanthology.org/aaai/2025/basheer2025aaai-trainability/}
}