Latency-Aware Neural Architecture Search with Multi-Objective Bayesian Optimization
Abstract
When tuning the architecture and hyperparameters of large machine learning models for on-device deployment, it is desirable to understand the optimal trade-offs between on-device latency and model accuracy. In this work, we leverage recent methodological advances in Bayesian optimization over high-dimensional search spaces and multi-objective Bayesian optimization to efficiently explore these trade-offs for a production-scale on-device natural language understanding model at Facebook.
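The paper does not include code on this page, but a minimal sketch of the kind of multi-objective Bayesian optimization loop the abstract describes, assuming the open-source Ax library (maintained by several of the authors) and a hypothetical `train_and_measure` helper that trains a candidate architecture and benchmarks its on-device latency, might look like:

```python
# Minimal sketch (assumption): multi-objective Bayesian optimization of an
# accuracy/latency trade-off with the Ax Service API. `train_and_measure`
# is a hypothetical helper, not part of the paper.
from ax.service.ax_client import AxClient, ObjectiveProperties


def train_and_measure(params: dict) -> dict:
    # Hypothetical: train the model defined by `params`, evaluate accuracy,
    # and benchmark on-device latency in milliseconds.
    raise NotImplementedError


ax_client = AxClient()
ax_client.create_experiment(
    name="latency_aware_nas",
    parameters=[
        # Illustrative search space; the paper's actual space is larger.
        {"name": "hidden_size", "type": "range", "bounds": [64, 512], "value_type": "int"},
        {"name": "num_layers", "type": "range", "bounds": [1, 6], "value_type": "int"},
        {"name": "learning_rate", "type": "range", "bounds": [1e-4, 1e-1], "log_scale": True},
    ],
    objectives={
        # Two competing objectives define the Pareto frontier to explore.
        "accuracy": ObjectiveProperties(minimize=False),
        "latency_ms": ObjectiveProperties(minimize=True),
    },
)

for _ in range(50):
    params, trial_index = ax_client.get_next_trial()
    ax_client.complete_trial(trial_index=trial_index, raw_data=train_and_measure(params))

# Architectures on the estimated accuracy/latency Pareto frontier.
pareto = ax_client.get_pareto_optimal_parameters()
```

For multi-objective experiments, Ax selects a hypervolume-based acquisition function by default, which is in the spirit of the trade-off exploration described in the abstract; the exact modeling choices used in the paper are not reproduced here.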
Cite
Text
Eriksson et al. "Latency-Aware Neural Architecture Search with Multi-Objective Bayesian Optimization." ICML 2021 Workshops: AutoML, 2021.
Markdown
[Eriksson et al. "Latency-Aware Neural Architecture Search with Multi-Objective Bayesian Optimization." ICML 2021 Workshops: AutoML, 2021.](https://mlanthology.org/icmlw/2021/eriksson2021icmlw-latencyaware/)
BibTeX
@inproceedings{eriksson2021icmlw-latencyaware,
title = {{Latency-Aware Neural Architecture Search with Multi-Objective Bayesian Optimization}},
author = {Eriksson, David and Chuang, Pierce I-Jen and Daulton, Samuel and Xia, Peng and Shrivastava, Akshat and Babu, Arun and Zhao, Shicong and Aly, Ahmed A and Venkatesh, Ganesh and Balandat, Maximilian},
booktitle = {ICML 2021 Workshops: AutoML},
year = {2021},
url = {https://mlanthology.org/icmlw/2021/eriksson2021icmlw-latencyaware/}
}