Understanding Generalization in Quantum Machine Learning with Margins

Abstract

Understanding and improving generalization capabilities is crucial for both classical and quantum machine learning (QML). Recent studies have revealed shortcomings in current generalization theories, particularly those relying on uniform bounds, across both classical and quantum settings. In this work, we present a margin-based generalization bound for QML models, providing a more reliable framework for evaluating generalization. Our experimental studies on the quantum phase recognition dataset demonstrate that margin-based metrics are strong predictors of generalization performance, outperforming traditional metrics such as parameter count. By connecting this margin-based metric to quantum information theory, we show how to enhance the generalization performance of QML through a classical-quantum hybrid approach when applied to classical data.
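As a rough illustration of the margin quantity the abstract refers to (a sketch of the standard multiclass margin, not the paper's exact implementation or normalization), one can compute, for each sample, the correct-class score minus the best competing score:

```python
import numpy as np

def margins(logits, labels):
    """Multiclass margin: correct-class score minus the largest
    score among the remaining classes. Positive margin means the
    sample is classified correctly, and larger margins indicate
    more confident predictions."""
    logits = np.asarray(logits, dtype=float)
    labels = np.asarray(labels)
    n = logits.shape[0]
    correct = logits[np.arange(n), labels]
    # Mask out the correct class to find the runner-up score.
    masked = logits.copy()
    masked[np.arange(n), labels] = -np.inf
    runner_up = masked.max(axis=1)
    return correct - runner_up

# Hypothetical example: three samples, two classes.
logits = np.array([[2.0, 0.5], [0.1, 0.3], [1.0, 1.0]])
labels = np.array([0, 1, 0])
print(margins(logits, labels))  # [1.5 0.2 0. ]
```

Margin-based metrics summarize the distribution of these per-sample margins over the training set (e.g., a low quantile of suitably normalized margins) as a proxy for how well the model will generalize.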

Cite

Text

Hur and Park. "Understanding Generalization in Quantum Machine Learning with Margins." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Hur and Park. "Understanding Generalization in Quantum Machine Learning with Margins." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/hur2025icml-understanding/)

BibTeX

@inproceedings{hur2025icml-understanding,
  title     = {{Understanding Generalization in Quantum Machine Learning with Margins}},
  author    = {Hur, Tak and Park, Daniel K.},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {26338-26360},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/hur2025icml-understanding/}
}