An Empirical Study of Distributed Deep Learning Training on Edge (Student Abstract)

Abstract

Deep learning (DL), despite its success in various fields, remains expensive and inaccessible to many due to its reliance on powerful supercomputing infrastructure and high-end GPUs. This study explores alternative computing infrastructure and methods for distributed DL training on low-energy, low-cost devices. We experiment on Raspberry Pi 4 devices with ARM Cortex-A72 processors, training a ResNet-18 model on the CIFAR-10 dataset. Our findings reveal limitations and opportunities for future optimization, paving the way toward a DL toolset for low-energy edge devices.
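The setup described in the abstract, data-parallel training of ResNet-18 on CIFAR-10 across CPU-only Raspberry Pi 4 boards, maps naturally onto distributed data-parallel training with a CPU-friendly collective backend. Below is a minimal sketch assuming PyTorch's `DistributedDataParallel` with the `gloo` backend (which runs collectives over TCP and works without GPUs); the paper does not state which framework it uses, and the hyperparameters, epoch count, and data path here are illustrative placeholders, not the authors' configuration.

```python
# Minimal CPU-only distributed data-parallel training sketch.
# ASSUMPTIONS: PyTorch + torchvision with the gloo backend; the paper does
# not specify its framework. Batch size, learning rate, epoch count, and
# the "./data" path are illustrative, not the authors' settings.
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler
from torchvision import datasets, models, transforms


def main():
    # One process per device; RANK, WORLD_SIZE, MASTER_ADDR, MASTER_PORT
    # are read from the environment (e.g. set by torchrun on each board).
    dist.init_process_group(backend="gloo")
    rank = dist.get_rank()

    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.4914, 0.4822, 0.4465),
                             (0.2470, 0.2435, 0.2616)),
    ])
    train_set = datasets.CIFAR10("./data", train=True, download=True,
                                 transform=transform)
    # DistributedSampler shards CIFAR-10 so each device trains on a
    # disjoint subset every epoch.
    sampler = DistributedSampler(train_set)
    loader = DataLoader(train_set, batch_size=32, sampler=sampler,
                        num_workers=2)

    # Randomly initialized ResNet-18 with a 10-class head; DDP averages
    # gradients across devices via all-reduce.
    model = DDP(models.resnet18(num_classes=10))
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

    for epoch in range(10):
        sampler.set_epoch(epoch)  # reshuffle the shards each epoch
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()  # gradient all-reduce happens here
            optimizer.step()
        if rank == 0:
            print(f"epoch {epoch}: last-batch loss {loss.item():.4f}")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Under these assumptions, each Raspberry Pi would launch one process, for example with `torchrun --nnodes=4 --nproc_per_node=1 --node_rank=<i> --master_addr=<head-node-ip> --master_port=29500 train.py`, so only the head node's address needs to be known in advance.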

Cite

Text

Mwase et al. "An Empirical Study of Distributed Deep Learning Training on Edge (Student Abstract)." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I21.30485

Markdown

[Mwase et al. "An Empirical Study of Distributed Deep Learning Training on Edge (Student Abstract)." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/mwase2024aaai-empirical/) doi:10.1609/AAAI.V38I21.30485

BibTeX

@inproceedings{mwase2024aaai-empirical,
  title     = {{An Empirical Study of Distributed Deep Learning Training on Edge (Student Abstract)}},
  author    = {Mwase, Christine and Kahira, Albert Njoroge and Zou, Zhuo},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {23590--23591},
  doi       = {10.1609/AAAI.V38I21.30485},
  url       = {https://mlanthology.org/aaai/2024/mwase2024aaai-empirical/}
}