Reprogramming GANs via Input Noise Design

Abstract

The goal of neural reprogramming is to alter the functionality of a fixed neural network solely by preprocessing its input. In this work, we show that Generative Adversarial Networks (GANs) can be reprogrammed by shaping the input noise distribution. One application of our algorithm is converting an unconditional GAN into a conditional GAN. We also empirically study the applicability, feasibility, and limitations of GAN reprogramming.
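The core idea, keeping the generator frozen and training only a transformation of its input noise, can be illustrated with a toy sketch. The snippet below is not the paper's method: the 1-D linear "generator," the per-class targets, and the moment-matching loss (a stand-in for the adversarial loss) are all illustrative assumptions. It learns one noise shift per class so that the fixed generator's output becomes class-conditional.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed, pretrained "generator": a hypothetical 1-D stand-in whose
# parameters (W, U) are frozen and never updated.
W, U = 1.5, 0.3

def generator(z):
    return W * z + U

# Hypothetical per-class output targets: class 0 -> mean -2, class 1 -> mean +2.
targets = {0: -2.0, 1: 2.0}

# The only trainable parameters are per-class noise shifts b_c:
# we reprogram the generator purely through its input noise.
shifts = {0: 0.0, 1: 0.0}
lr = 0.05

for _ in range(500):
    for c, mu in targets.items():
        eps = rng.standard_normal(256)
        out = generator(eps + shifts[c])
        # Moment-matching loss (E[G(eps + b_c)] - mu)^2, used here as a
        # simple stand-in for an adversarial objective; its gradient
        # w.r.t. b_c is 2 * (mean - mu) * W for the linear generator.
        shifts[c] -= lr * 2.0 * (out.mean() - mu) * W

# After training, each class's shifted noise steers the frozen generator
# toward its target output distribution.
for c, mu in targets.items():
    eps = rng.standard_normal(4096)
    print(c, generator(eps + shifts[c]).mean())
```

In a real setting the shift would be replaced by a learned noise-transformation network and the loss by a discriminator, but the constraint is the same: the generator's weights stay fixed, and only the input noise design changes.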

Cite

Text

Lee et al. "Reprogramming GANs via Input Noise Design." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2020. doi:10.1007/978-3-030-67661-2_16

Markdown

[Lee et al. "Reprogramming GANs via Input Noise Design." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2020.](https://mlanthology.org/ecmlpkdd/2020/lee2020ecmlpkdd-reprogramming/) doi:10.1007/978-3-030-67661-2_16

BibTeX

@inproceedings{lee2020ecmlpkdd-reprogramming,
  title     = {{Reprogramming GANs via Input Noise Design}},
  author    = {Lee, Kangwook and Suh, Changho and Ramchandran, Kannan},
  booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
  year      = {2020},
  pages     = {256--271},
  doi       = {10.1007/978-3-030-67661-2_16},
  url       = {https://mlanthology.org/ecmlpkdd/2020/lee2020ecmlpkdd-reprogramming/}
}