Compressive Meta-Learning
Abstract
The rapid expansion in the size of new datasets has created a need for fast and efficient parameter-learning techniques. Compressive learning is a framework that enables efficient processing by using random, non-linear features to project large-scale databases onto compact, information-preserving representations whose dimensionality is independent of the number of samples and which can be easily stored, transferred, and processed. These database-level summaries are then used to decode parameters of interest from the underlying data distribution without requiring access to the original samples, offering an efficient and privacy-friendly learning framework. However, both the encoding and decoding stages are typically randomized and data-independent, failing to exploit the underlying structure of the data. In this work, we propose a framework that meta-learns both the encoding and decoding stages of compressive learning methods using neural networks, yielding systems that are faster and more accurate than current state-of-the-art approaches. To demonstrate the potential of the presented Compressive Meta-Learning framework, we explore multiple applications -- including neural network-based compressive PCA, compressive ridge regression, compressive k-means, and autoencoders.
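To make the sketching stage concrete, the following is a minimal, illustrative example of classical (non-meta-learned) compressive learning encoding with random Fourier features. The array names and dimensions are assumptions for illustration, not the paper's implementation; the meta-learning contribution of the paper replaces the random projection and decoder with learned neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: n samples in d dimensions (illustrative sizes).
n, d, m = 10_000, 5, 64  # m = sketch size, independent of n
X = rng.normal(size=(n, d))

# Random, data-independent projection frequencies, as used in
# classical compressive learning with random Fourier features.
W = rng.normal(size=(d, m))

# The sketch is the empirical mean of the non-linear random features.
# It has fixed size m (complex-valued) regardless of how many samples
# are streamed through it, so it can be stored and transferred cheaply.
Z = np.exp(1j * X @ W).mean(axis=0)

print(Z.shape)  # (64,)
```

A decoder would then estimate parameters of interest (e.g., k-means centroids) from `Z` alone, without revisiting the original samples.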
Cite
Text
Montserrat et al. "Compressive Meta-Learning." ICLR 2025 Workshops: WSL, 2025. doi:10.1145/3711896.3736889

Markdown
[Montserrat et al. "Compressive Meta-Learning." ICLR 2025 Workshops: WSL, 2025.](https://mlanthology.org/iclrw/2025/montserrat2025iclrw-compressive/) doi:10.1145/3711896.3736889

BibTeX
@inproceedings{montserrat2025iclrw-compressive,
title = {{Compressive Meta-Learning}},
author = {Montserrat, Daniel Mas and Bonet, David and Perera, Maria and Giró-i-Nieto, Xavier and Ioannidis, Alexander G.},
booktitle = {ICLR 2025 Workshops: WSL},
year = {2025},
doi = {10.1145/3711896.3736889},
url = {https://mlanthology.org/iclrw/2025/montserrat2025iclrw-compressive/}
}