Memory-Optimized Once-for-All Network

Abstract

Deploying Deep Neural Networks (DNNs) on different hardware platforms is challenging due to varying resource constraints. Besides handcrafted approaches aimed at making deep models hardware-friendly, Neural Architecture Search is emerging as a toolbox to craft more efficient DNNs without sacrificing performance. Among these, the Once-For-All (OFA) approach offers a solution by allowing the sampling of well-performing sub-networks from a single supernet; this leads to evident advantages in terms of computation. However, OFA does not fully utilize the potential memory capacity of the target device, focusing instead on limiting maximum memory usage per layer. This leaves room for unexploited potential in terms of model generalizability. In this paper, we introduce a Memory-Optimized OFA (MOOFA) supernet, designed to enhance DNN deployment on resource-limited devices by maximizing memory usage (and, consequently, feature diversity) across different configurations. Tested on ImageNet, our MOOFA supernet demonstrates improvements in memory exploitation and model accuracy compared to the original OFA supernet. Our code is available at https://github.com/MaximeGirard/memory-optimized-once-for-all.

Cite

Text

Girard et al. "Memory-Optimized Once-for-All Network." European Conference on Computer Vision Workshops, 2024. doi:10.1007/978-3-031-91979-4_19

Markdown

[Girard et al. "Memory-Optimized Once-for-All Network." European Conference on Computer Vision Workshops, 2024.](https://mlanthology.org/eccvw/2024/girard2024eccvw-memoryoptimized/) doi:10.1007/978-3-031-91979-4_19

BibTeX

@inproceedings{girard2024eccvw-memoryoptimized,
  title     = {{Memory-Optimized Once-for-All Network}},
  author    = {Girard, Maxime and Quétu, Victor and Tardieu, Samuel and Nguyen, Van-Tam and Tartaglione, Enzo},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2024},
  pages     = {252--267},
  doi       = {10.1007/978-3-031-91979-4_19},
  url       = {https://mlanthology.org/eccvw/2024/girard2024eccvw-memoryoptimized/}
}