SplitNets: Designing Neural Architectures for Efficient Distributed Computing on Head-Mounted Systems
Abstract
We design deep neural networks (DNNs) and their corresponding splittings to distribute DNN workloads across camera sensors and a centralized aggregator on head-mounted devices, meeting system performance targets for inference accuracy and latency under given hardware resource constraints. To achieve an optimal balance among computation, communication, and performance, we introduce SplitNets, a split-aware neural architecture search framework that performs model design, splitting, and communication reduction simultaneously. We further extend the framework to multi-view systems, learning to fuse inputs from multiple camera sensors with optimal performance and system efficiency. We validate SplitNets for single-view systems on ImageNet and multi-view systems on 3D classification, showing that the SplitNets framework achieves state-of-the-art (SOTA) performance and system latency compared with existing approaches.
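The core idea of splitting a DNN between on-sensor computation and a centralized aggregator can be illustrated with a toy sketch: the split point determines which intermediate activation must be transmitted over the sensor-to-aggregator link, so one simple heuristic is to prefer splits with small activation payloads. The layer shapes and the scoring below are illustrative assumptions, not the paper's actual search procedure.

```python
# Toy sketch: choose the split point that minimizes the activation
# payload sent from the on-sensor "head" to the aggregator "tail".
# Layer output shapes are made-up examples, not from the paper.
layer_output_shapes = [
    (32, 112, 112),  # early conv: large spatial activations
    (64, 56, 56),
    (128, 28, 28),
    (256, 14, 14),   # deeper layers: smaller activations
]

def payload_elements(shape):
    """Number of activation elements crossing the sensor->aggregator link."""
    n = 1
    for d in shape:
        n *= d
    return n

# Splitting after layer i means layer i's output is transmitted.
costs = [payload_elements(s) for s in layer_output_shapes]
best_split = min(range(len(costs)), key=costs.__getitem__)
print(best_split, costs[best_split])  # → 3 50176
```

In practice this trade-off also involves on-sensor compute limits and accuracy, which is why SplitNets searches over architecture and split point jointly rather than scoring a fixed network.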
Cite
Text
Dong et al. "SplitNets: Designing Neural Architectures for Efficient Distributed Computing on Head-Mounted Systems." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.01223
Markdown
[Dong et al. "SplitNets: Designing Neural Architectures for Efficient Distributed Computing on Head-Mounted Systems." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/dong2022cvpr-splitnets/) doi:10.1109/CVPR52688.2022.01223
BibTeX
@inproceedings{dong2022cvpr-splitnets,
title = {{SplitNets: Designing Neural Architectures for Efficient Distributed Computing on Head-Mounted Systems}},
author = {Dong, Xin and De Salvo, Barbara and Li, Meng and Liu, Chiao and Qu, Zhongnan and Kung, H.T. and Li, Ziyun},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2022},
pages = {12559-12569},
doi = {10.1109/CVPR52688.2022.01223},
url = {https://mlanthology.org/cvpr/2022/dong2022cvpr-splitnets/}
}