Off-Policy Reinforcement Learning for Efficient and Effective GAN Architecture Search

Abstract

In this paper, we introduce a new reinforcement learning (RL) based neural architecture search (NAS) methodology for effective and efficient generative adversarial network (GAN) architecture search. The key idea is to formulate the GAN architecture search problem as a Markov decision process (MDP) for smoother architecture sampling, which enables a more effective RL-based search algorithm by targeting the potential global optimal architecture. To improve efficiency, we exploit an off-policy GAN architecture search algorithm that makes efficient use of the samples generated by previous policies. Evaluation on two standard benchmark datasets (i.e., CIFAR-10 and STL-10) demonstrates that the proposed method discovers highly competitive architectures that yield generally better image generation results, at a considerably reduced computational cost of 7 GPU hours. Our code is available at https://github.com/Yuantian013/E2GAN.
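The efficiency gain described above comes from reusing transitions collected under earlier policies rather than discarding them after each policy update. A minimal sketch of this off-policy sample reuse is a replay buffer over MDP transitions; here, states, actions, and rewards are purely illustrative stand-ins (e.g., a partial architecture, a next-cell choice, and a generation-quality score) and do not reflect the authors' actual implementation:

```python
import random
from collections import deque


class ReplayBuffer:
    """Stores (state, action, reward, next_state, done) transitions so that
    data gathered by previous search policies can be reused for updates."""

    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Off-policy: a minibatch mixes transitions from many past policies,
        # not just the current one.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)


# Hypothetical usage: each transition records a partial architecture (state),
# the cell chosen next (action), and a reward from evaluating the resulting GAN.
buffer = ReplayBuffer()
for step in range(100):
    buffer.push(state=step, action=step % 4, reward=0.5, next_state=step + 1, done=False)
batch = buffer.sample(32)
```

Because updates draw on this accumulated buffer, each expensive architecture evaluation contributes to many policy improvements instead of one, which is the mechanism behind the reduced search cost.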

Cite

Text

Tian et al. "Off-Policy Reinforcement Learning for Efficient and Effective GAN Architecture Search." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58571-6_11

Markdown

[Tian et al. "Off-Policy Reinforcement Learning for Efficient and Effective GAN Architecture Search." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/tian2020eccv-offpolicy/) doi:10.1007/978-3-030-58571-6_11

BibTeX

@inproceedings{tian2020eccv-offpolicy,
  title     = {{Off-Policy Reinforcement Learning for Efficient and Effective GAN Architecture Search}},
  author    = {Tian, Yuan and Wang, Qin and Huang, Zhiwu and Li, Wen and Dai, Dengxin and Yang, Minghao and Wang, Jun and Fink, Olga},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58571-6_11},
  url       = {https://mlanthology.org/eccv/2020/tian2020eccv-offpolicy/}
}