Deep Linear Probe Generators for Weight Space Learning
Abstract
Weight space learning aims to extract information about a neural network, such as its training dataset or generalization error. Recent approaches learn directly from model weights, but this presents many challenges, as weights are high-dimensional and include permutation symmetries between neurons. An alternative approach, probing, represents a model by passing a set of learned inputs (probes) through the model and training a predictor on top of the corresponding outputs. Although probing is typically not used as a stand-alone approach, our preliminary experiment found that a vanilla probing baseline worked surprisingly well. However, we discover that current probe learning strategies are ineffective. We therefore propose Deep Linear Probe Generators (ProbeGen), a simple and effective modification to probing approaches. ProbeGen adds a shared generator module with a deep linear architecture, providing an inductive bias towards structured probes and thus reducing overfitting. While simple, ProbeGen performs significantly better than the state of the art and is very efficient, requiring between 30 and 1,000 times fewer FLOPs than other top approaches.
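The abstract's pipeline can be sketched in a few lines: learned latent codes are mapped by a deep linear generator (stacked linear layers with no activations) into probes, the probed network's responses to those probes are concatenated, and a downstream predictor is trained on that feature vector. The sketch below is a minimal NumPy illustration under assumed dimensions; it is not the authors' implementation, and the probed network here is a stand-in random MLP.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 8 probes of dimension 16.
num_probes, latent_dim, probe_dim = 8, 4, 16

# Learned latent codes, one per probe (would be optimized during training).
Z = rng.normal(size=(num_probes, latent_dim))

# Deep linear generator: a stack of linear maps with NO nonlinearities.
# Mathematically this collapses to a single linear map, but the
# overparameterization provides an inductive bias towards structured probes.
W1 = rng.normal(size=(latent_dim, 32))
W2 = rng.normal(size=(32, probe_dim))
probes = Z @ W1 @ W2                      # shape: (num_probes, probe_dim)

# Stand-in for the network being probed: a small fixed MLP.
U, V = rng.normal(size=(probe_dim, 10)), rng.normal(size=(10, 3))
def probed_model(x):
    return np.maximum(x @ U, 0.0) @ V     # ReLU MLP, 3 outputs per probe

# The model is represented by its concatenated probe responses; a predictor
# (e.g. for generalization error) would be trained on this feature vector.
features = probed_model(probes).reshape(-1)
print(features.shape)                     # (num_probes * 3,) = (24,)
```

Because the generator is linear end to end, `probes` equals `Z @ (W1 @ W2)`; the depth matters only for optimization dynamics and regularization, not for expressivity.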
Cite
Text
Kahana et al. "Deep Linear Probe Generators for Weight Space Learning." International Conference on Learning Representations, 2025.

Markdown
[Kahana et al. "Deep Linear Probe Generators for Weight Space Learning." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/kahana2025iclr-deep/)

BibTeX
@inproceedings{kahana2025iclr-deep,
  title     = {{Deep Linear Probe Generators for Weight Space Learning}},
  author    = {Kahana, Jonathan and Horwitz, Eliahu and Shuval, Imri and Hoshen, Yedid},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/kahana2025iclr-deep/}
}