Landmark Regularization: Ranking Guided Super-Net Training in Neural Architecture Search

Abstract

Weight sharing has become a de facto standard in neural architecture search because it enables the search to be performed on commodity hardware. However, recent works have empirically shown a ranking disorder between the performance of stand-alone architectures and that of the corresponding shared-weight networks. This violates the main assumption of weight-sharing NAS algorithms, thus limiting their effectiveness. We tackle this issue by proposing a regularization term that aims to maximize the correlation between the performance ranking of the shared-weight network and that of the stand-alone architectures, using a small set of landmark architectures. We incorporate our regularization term into three different NAS algorithms and show that it consistently improves performance across algorithms, search spaces, and tasks.
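
To make the idea concrete, below is a minimal PyTorch sketch of one plausible form of such a landmark regularizer: a pairwise ranking hinge over a small set of landmark architectures whose stand-alone performance is known. The function name, the margin parameter, and the exact hinge formulation are illustrative assumptions, not the paper's exact definition.

import torch

def landmark_ranking_regularizer(supernet_losses, standalone_accs, margin=0.0):
    """Pairwise ranking hinge over a small set of landmark architectures.

    supernet_losses: 1-D tensor of validation losses of the landmarks when
        evaluated with weights inherited from the super-net (differentiable).
    standalone_accs: 1-D tensor of known stand-alone accuracies of the same
        landmarks (higher is better; fixed, no gradient flows through it).

    For every pair (i, j) where landmark i outperforms landmark j as a
    stand-alone model, the super-net loss of i should be lower than that of
    j; each rank violation is penalized with a hinge.
    """
    reg = supernet_losses.new_zeros(())
    n = supernet_losses.numel()
    for i in range(n):
        for j in range(n):
            if standalone_accs[i] > standalone_accs[j]:
                # penalize rank disorder: want loss_i <= loss_j
                reg = reg + torch.clamp(
                    supernet_losses[i] - supernet_losses[j] + margin, min=0.0
                )
    return reg

In a weight-sharing NAS loop, a term like this would be added to the super-net training objective with a weighting coefficient, e.g. total_loss = task_loss + lam * landmark_ranking_regularizer(losses, accs), where lam is a hypothetical hyperparameter.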

Cite

Text

Yu et al. "Landmark Regularization: Ranking Guided Super-Net Training in Neural Architecture Search." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.01351

Markdown

[Yu et al. "Landmark Regularization: Ranking Guided Super-Net Training in Neural Architecture Search." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/yu2021cvpr-landmark/) doi:10.1109/CVPR46437.2021.01351

BibTeX

@inproceedings{yu2021cvpr-landmark,
  title     = {{Landmark Regularization: Ranking Guided Super-Net Training in Neural Architecture Search}},
  author    = {Yu, Kaicheng and Ranftl, Ren{\'e} and Salzmann, Mathieu},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2021},
  pages     = {13723--13732},
  doi       = {10.1109/CVPR46437.2021.01351},
  url       = {https://mlanthology.org/cvpr/2021/yu2021cvpr-landmark/}
}