The Pick-to-Learn Algorithm: Empowering Compression for Tight Generalization Bounds and Improved Post-Training Performance
Abstract
Generalization bounds are valuable both for theory and applications. On the one hand, they shed light on the mechanisms that underpin learning; on the other, they certify how well a learned model performs on unseen inputs. In this work, we build upon a recent breakthrough in compression theory to develop a new framework yielding tight generalization bounds of wide practical applicability. The core idea is to embed any given learning algorithm into a suitably constructed meta-algorithm (here called Pick-to-Learn, P2L) in order to instill desirable compression properties. When applied to the MNIST classification dataset and to a synthetic regression problem, P2L not only attains generalization bounds that compare favorably with the state of the art (test-set and PAC-Bayes bounds), but also learns models with better post-training performance.
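Below is a minimal Python sketch of the greedy compression scheme the abstract alludes to, under stated assumptions: the names pick_to_learn, train, loss, init_idx, and tol are illustrative placeholders, not the paper's interface. The meta-algorithm wraps an arbitrary learner and grows a "compression set": train on the current subset, add the worst-predicted remaining sample, and stop once every sample outside the set is predicted within tolerance.

def pick_to_learn(data, train, loss, init_idx, tol):
    """Hypothetical sketch of a P2L-style compression wrapper.

    data     -- list of training samples
    train    -- callable: subset of samples -> model (the base learner)
    loss     -- callable: (model, sample) -> nonnegative prediction error
    init_idx -- indices seeding the initial compression set
    tol      -- stop when all remaining samples have loss <= tol
    """
    picked = set(init_idx)
    while True:
        # Retrain the base learner on the current compression set only.
        model = train([data[i] for i in picked])
        remaining = [i for i in range(len(data)) if i not in picked]
        if not remaining:
            return model, picked
        # Find the worst-predicted sample among those not yet picked.
        worst = max(remaining, key=lambda i: loss(model, data[i]))
        if loss(model, data[worst]) <= tol:
            # Every unseen sample is already well predicted: terminate.
            return model, picked
        picked.add(worst)

Because the returned model depends on the picked samples alone, a compression-style generalization bound depends only on the sizes of picked and data, regardless of the inner learning algorithm.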
Cite
Text
Paccagnan et al. "The Pick-to-Learn Algorithm: Empowering Compression for Tight Generalization Bounds and Improved Post-Training Performance." Neural Information Processing Systems, 2023.Markdown
[Paccagnan et al. "The Pick-to-Learn Algorithm: Empowering Compression for Tight Generalization Bounds and Improved Post-Training Performance." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/paccagnan2023neurips-picktolearn/)BibTeX
@inproceedings{paccagnan2023neurips-picktolearn,
title = {{The Pick-to-Learn Algorithm: Empowering Compression for Tight Generalization Bounds and Improved Post-Training Performance}},
author = {Paccagnan, Dario and Campi, Marco and Garatti, Simone},
booktitle = {Neural Information Processing Systems},
year = {2023},
url = {https://mlanthology.org/neurips/2023/paccagnan2023neurips-picktolearn/}
}