Towards Real-Time Segmentation on the Edge
Abstract
Research on real-time segmentation has mainly focused on desktop GPUs. However, autonomous driving and many other applications require real-time segmentation on edge devices, and the current state of the art remains far from this goal. In addition, recent advances in vision transformers inspire us to re-design the network architecture for dense prediction tasks. In this work, we propose to combine self-attention blocks with lightweight convolutions to form new building blocks, and employ latency constraints to search for an efficient sub-network. We train an MLP latency model on generated architecture configurations and their latencies measured on mobile devices, so that we can predict the latency of subnets during the search phase. To the best of our knowledge, we are the first to achieve over 74% mIoU on Cityscapes with semi-real-time inference (over 15 FPS) on the mobile GPU of an off-the-shelf phone.
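The abstract mentions training an MLP latency model that maps architecture configurations to measured mobile latency, so candidate subnets can be scored during search without on-device measurement. Below is a minimal, hedged sketch of that idea in pure NumPy, assuming configurations are encoded as fixed-length feature vectors (e.g. per-block depth, width, kernel size); the feature encoding, MLP size, and training data here are synthetic stand-ins, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 512 "architecture configs" with 8 features each.
# The "measured" latency is a hidden linear function of the features plus noise;
# in the paper this would be real latency measured on a mobile device.
X = rng.uniform(0.0, 1.0, size=(512, 8))
w_true = rng.uniform(0.5, 2.0, size=8)
y = X @ w_true + 0.1 * rng.standard_normal(512)

# One-hidden-layer MLP regressor, trained by full-batch gradient descent.
H = 32
W1 = rng.standard_normal((8, H)) * 0.1
b1 = np.zeros(H)
W2 = rng.standard_normal(H) * 0.1
b2 = 0.0

def forward(X):
    z = X @ W1 + b1
    a = np.maximum(z, 0.0)          # ReLU
    return z, a, a @ W2 + b2        # predicted latency per config

lr = 0.05
losses = []
for _ in range(500):
    z, a, pred = forward(X)
    err = pred - y                   # gradient of 0.5 * squared error
    losses.append(float(np.mean(err ** 2)))
    n = len(X)
    gW2 = a.T @ err / n
    gb2 = err.mean()
    da = np.outer(err, W2) * (z > 0.0)   # backprop through ReLU
    gW1 = X.T @ da / n
    gb1 = da.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(f"train MSE: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

During search, `forward` would score each candidate subnet's encoded config, letting a latency constraint prune candidates cheaply; only the final architectures need real on-device measurement.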
Cite
Text
Li et al. "Towards Real-Time Segmentation on the Edge." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I2.25232

Markdown

[Li et al. "Towards Real-Time Segmentation on the Edge." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/li2023aaai-real/) doi:10.1609/AAAI.V37I2.25232

BibTeX
@inproceedings{li2023aaai-real,
title = {{Towards Real-Time Segmentation on the Edge}},
author = {Li, Yanyu and Yang, Changdi and Zhao, Pu and Yuan, Geng and Niu, Wei and Guan, Jiexiong and Tang, Hao and Qin, Minghai and Jin, Qing and Ren, Bin and Lin, Xue and Wang, Yanzhi},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2023},
pages = {1468-1476},
doi = {10.1609/AAAI.V37I2.25232},
url = {https://mlanthology.org/aaai/2023/li2023aaai-real/}
}