Manas: Multi-Agent Neural Architecture Search
Abstract
The Neural Architecture Search (NAS) problem is typically formulated as a graph search problem where the goal is to learn the optimal operations over edges in order to maximize a graph-level global objective. Due to the large architecture parameter space, efficiency is a key bottleneck preventing practical use of NAS. In this work, we address this issue by framing NAS as a multi-agent problem, where agents control a subset of the network and coordinate to reach optimal architectures. We provide two distinct lightweight implementations with reduced memory requirements (1/8th of state-of-the-art) and performance above that of much more computationally expensive methods. Theoretically, we demonstrate vanishing regrets of the form $\mathcal{O}(\sqrt{T})$, with $T$ being the total number of rounds. Finally, we perform experiments on CIFAR-10 and ImageNet and, aware that random search and random sampling are often-ignored yet effective baselines, we conduct additional experiments on 3 alternative datasets, with complexity constraints, and 2 network configurations, achieving competitive results in comparison with the baselines and other methods.
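The multi-agent framing described above, where each agent controls part of the network and all agents share a global objective, can be illustrated with a minimal sketch in which one bandit agent per edge selects an operation and is updated from a single architecture-level reward. This uses an Exp3-style update as a stand-in; the class names, the objective, and the update rule are illustrative assumptions, not the paper's exact algorithm.

```python
import math
import random


class EdgeAgent:
    """Hypothetical per-edge bandit agent (Exp3-style), for illustration only."""

    def __init__(self, n_ops, gamma=0.1):
        self.n_ops = n_ops
        self.gamma = gamma  # exploration rate
        self.weights = [1.0] * n_ops

    def probs(self):
        # Mixture of exponential weights and uniform exploration.
        total = sum(self.weights)
        return [(1 - self.gamma) * w / total + self.gamma / self.n_ops
                for w in self.weights]

    def sample(self):
        # Draw one operation index according to the current distribution.
        r, acc = random.random(), 0.0
        for op, p in enumerate(self.probs()):
            acc += p
            if r < acc:
                return op
        return self.n_ops - 1

    def update(self, op, reward):
        # Importance-weighted reward estimate for the chosen operation.
        p = self.probs()[op]
        self.weights[op] *= math.exp(self.gamma * (reward / p) / self.n_ops)


def search(n_edges, n_ops, reward_fn, rounds=2000, seed=0):
    """Each round: every agent picks its edge's operation, the joint
    architecture receives one global reward, and all agents update."""
    random.seed(seed)
    agents = [EdgeAgent(n_ops) for _ in range(n_edges)]
    for _ in range(rounds):
        arch = [agent.sample() for agent in agents]
        reward = reward_fn(arch)  # single graph-level objective in [0, 1]
        for agent, op in zip(agents, arch):
            agent.update(op, reward)
    # Final architecture: each agent's highest-weight operation.
    return [max(range(n_ops), key=lambda o: agent.weights[o])
            for agent in agents]


if __name__ == "__main__":
    # Toy objective: fraction of edges matching a fixed target architecture.
    target = [1, 2, 0]
    best = search(3, 3,
                  lambda arch: sum(a == t for a, t in zip(arch, target)) / 3)
    print("selected architecture:", best)
```

The key property this sketch shares with the abstract's framing is decentralization: no agent ever sees the full search space, only its own operations and the shared reward, which is what keeps per-agent memory small.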
Cite

Text:
Lopes et al. "Manas: Multi-Agent Neural Architecture Search." Machine Learning, 2024. doi:10.1007/s10994-023-06379-w

Markdown:
[Lopes et al. "Manas: Multi-Agent Neural Architecture Search." Machine Learning, 2024.](https://mlanthology.org/mlj/2024/lopes2024mlj-manas/) doi:10.1007/s10994-023-06379-w

BibTeX:
@article{lopes2024mlj-manas,
title = {{Manas: Multi-Agent Neural Architecture Search}},
author = {Lopes, Vasco and Carlucci, Fabio Maria and Esperança, Pedro M. and Singh, Marco and Yang, Antoine and Gabillon, Victor and Xu, Hang and Chen, Zewei and Wang, Jun},
journal = {Machine Learning},
year = {2024},
pages = {73-96},
doi = {10.1007/s10994-023-06379-w},
volume = {113},
url = {https://mlanthology.org/mlj/2024/lopes2024mlj-manas/}
}