Hyper-Representation for Pre-Training and Transfer Learning

Abstract

Learning representations of neural network weights given a model zoo is an emerging and challenging area with many potential applications, ranging from model inspection to neural architecture search and knowledge distillation. Recently, an autoencoder trained on a model zoo was able to learn a hyper-representation that captures intrinsic and extrinsic properties of the models in the zoo. In this work, we extend hyper-representations for generative use, sampling new model weights to serve as pre-training. We propose layer-wise loss normalization, which we demonstrate is key to generating high-performing models, as well as a sampling method based on the empirical density of hyper-representations. The models generated with our methods are diverse, performant, and capable of outperforming conventional baselines for transfer learning. Our results indicate the potential of aggregating knowledge from model zoos into new models via hyper-representations, thereby paving the way for novel research directions.
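The sketch below illustrates, in broad strokes, what "sampling from the empirical density of hyper-representations" could look like. It is not the authors' released code: the untrained linear encoder/decoder, the kernel density estimate, the dimensions, and the bandwidth are all illustrative assumptions standing in for the paper's trained hyper-representation autoencoder and its sampling procedure.

import torch
import torch.nn as nn
from sklearn.neighbors import KernelDensity

D, d, N = 1_000, 32, 50          # flattened weight dim, latent dim, zoo size (illustrative)
encoder = nn.Linear(D, d)        # stand-in for the trained hyper-representation encoder
decoder = nn.Linear(d, D)        # stand-in for the matching decoder

zoo = torch.randn(N, D)          # stand-in for the flattened weights of a model zoo

# 1) Embed every zoo model into hyper-representation space.
with torch.no_grad():
    z = encoder(zoo).numpy()     # [N, d]

# 2) Fit an empirical density over the embeddings and draw new latents.
kde = KernelDensity(kernel="gaussian", bandwidth=0.2).fit(z)
z_new = torch.tensor(kde.sample(16), dtype=torch.float32)

# 3) Decode the sampled latents back into weight vectors.
with torch.no_grad():
    new_weights = decoder(z_new)  # [16, D]

# new_weights would then be reshaped into network layers and used as a
# pre-training initialization before fine-tuning on the target task.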

Cite

Text

Schürholt et al. "Hyper-Representation for Pre-Training and Transfer Learning." ICML 2022 Workshops: Pre-Training, 2022.

Markdown

[Schürholt et al. "Hyper-Representation for Pre-Training and Transfer Learning." ICML 2022 Workshops: Pre-Training, 2022.](https://mlanthology.org/icmlw/2022/schurholt2022icmlw-hyperrepresentation/)

BibTeX

@inproceedings{schurholt2022icmlw-hyperrepresentation,
  title     = {{Hyper-Representation for Pre-Training and Transfer Learning}},
  author    = {Schürholt, Konstantin and Knyazev, Boris and Giró-i-Nieto, Xavier and Borth, Damian},
  booktitle = {ICML 2022 Workshops: Pre-Training},
  year      = {2022},
  url       = {https://mlanthology.org/icmlw/2022/schurholt2022icmlw-hyperrepresentation/}
}