Self-Supervised Representation Learning on Neural Network Weights for Model Characteristic Prediction

Abstract

Self-Supervised Learning (SSL) has been shown to learn useful and information-preserving representations. Neural Networks (NNs) are widely applied, yet their weight space is still not fully understood. Therefore, we propose to use SSL to learn hyper-representations of the weights of populations of NNs. To that end, we introduce domain-specific data augmentations and an adapted attention architecture. Our empirical evaluation demonstrates that self-supervised representation learning in this domain recovers diverse NN model characteristics. Further, we show that the proposed learned representations outperform prior work at predicting hyper-parameters, test accuracy, and generalization gap, and that they transfer to out-of-distribution settings.
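The abstract outlines a general recipe: treat a model's flattened weights as an input sequence, encode that sequence with an attention architecture, and train the encoder with a self-supervised objective over augmented views drawn from a population of models. Below is a minimal sketch of that recipe, not the paper's implementation; the chunk size, the Gaussian-noise augmentation, and the InfoNCE-style contrastive loss are illustrative assumptions (PyTorch assumed).

# Minimal sketch: encode NN weights as token sequences and train a
# transformer encoder with a contrastive SSL objective. All names, the
# chunk size, and the augmentation are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

CHUNK = 32  # hypothetical token size: each token is a chunk of 32 weights

def weights_to_tokens(model: nn.Module) -> torch.Tensor:
    """Flatten all parameters and split them into fixed-size tokens."""
    flat = torch.cat([p.detach().flatten() for p in model.parameters()])
    pad = (-flat.numel()) % CHUNK
    flat = F.pad(flat, (0, pad))
    return flat.view(-1, CHUNK)  # (num_tokens, CHUNK)

class WeightEncoder(nn.Module):
    """Attention encoder mapping a weight-token sequence to one embedding."""
    def __init__(self, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(CHUNK, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, d_model)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        h = self.encoder(self.embed(tokens))  # (B, num_tokens, d_model)
        return F.normalize(self.head(h.mean(dim=1)), dim=-1)

def augment(tokens: torch.Tensor) -> torch.Tensor:
    """Stand-in augmentation: small Gaussian noise on the weight tokens."""
    return tokens + 0.01 * torch.randn_like(tokens)

def contrastive_loss(z1, z2, tau=0.1):
    """InfoNCE-style loss pairing each sample with its augmented view."""
    logits = z1 @ z2.t() / tau
    targets = torch.arange(z1.size(0))
    return F.cross_entropy(logits, targets)

# Usage: a toy "population" of two small MLPs with identical architecture.
models = [nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
          for _ in range(2)]
batch = torch.stack([weights_to_tokens(m) for m in models])  # (B, T, CHUNK)
enc = WeightEncoder()
loss = contrastive_loss(enc(augment(batch)), enc(augment(batch)))
loss.backward()

Once trained on a model population, such an encoder's embeddings could be fed to a simple downstream probe (e.g., a linear model) to predict characteristics like test accuracy or hyper-parameters; mean pooling over tokens here is just a simple stand-in for an aggregation strategy.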

Cite

Text

Schürholt et al. "Self-Supervised Representation Learning on Neural Network Weights for Model Characteristic Prediction." Neural Information Processing Systems, 2021.

Markdown

[Schürholt et al. "Self-Supervised Representation Learning on Neural Network Weights for Model Characteristic Prediction." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/schurholt2021neurips-selfsupervised/)

BibTeX

@inproceedings{schurholt2021neurips-selfsupervised,
  title     = {{Self-Supervised Representation Learning on Neural Network Weights for Model Characteristic Prediction}},
  author    = {Schürholt, Konstantin and Kostadinov, Dimche and Borth, Damian},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/schurholt2021neurips-selfsupervised/}
}