Efficient Single-Image Depth Estimation on Mobile Devices, Mobile AI & AIM 2022 Challenge: Report

Abstract

Various depth estimation models are now widely used on many mobile and IoT devices for image segmentation, bokeh effect rendering, object tracking, and many other mobile tasks. Thus, it is crucial to have efficient and accurate depth estimation models that can run fast on low-power mobile chipsets. In this Mobile AI challenge, the target was to develop deep learning-based single-image depth estimation solutions that can show real-time performance on IoT platforms and smartphones. For this, the participants used a large-scale RGB-to-depth dataset that was collected with the ZED stereo camera, capable of generating depth maps for objects located at up to 50 m. The runtime of all models was evaluated on the Raspberry Pi 4 platform, where the developed solutions were able to generate VGA-resolution depth maps at up to 27 FPS while achieving high-fidelity results. All models developed in the challenge are also compatible with any Android or Linux-based mobile device; their detailed description is provided in this paper.

Cite

Text

Ignatov et al. "Efficient Single-Image Depth Estimation on Mobile Devices, Mobile AI & AIM 2022 Challenge: Report." European Conference on Computer Vision Workshops, 2022. doi:10.1007/978-3-031-25066-8_4

Markdown

[Ignatov et al. "Efficient Single-Image Depth Estimation on Mobile Devices, Mobile AI & AIM 2022 Challenge: Report." European Conference on Computer Vision Workshops, 2022.](https://mlanthology.org/eccvw/2022/ignatov2022eccvw-efficient/) doi:10.1007/978-3-031-25066-8_4

BibTeX

@inproceedings{ignatov2022eccvw-efficient,
  title     = {{Efficient Single-Image Depth Estimation on Mobile Devices, Mobile AI \& AIM 2022 Challenge: Report}},
  author    = {Ignatov, Andrey and Malivenko, Grigory and Timofte, Radu and Treszczotko, Lukasz and Chang, Xin and Ksiazek, Piotr and Lopuszynski, Michal and Pioro, Maciej and Rudnicki, Rafal and Smyl, Maciej and Ma, Yujie and Li, Zhenyu and Chen, Zehui and Xu, Jialei and Liu, Xianming and Jiang, Junjun and Shi, XueChao and Xu, Difan and Li, Yanan and Wang, Xiaotao and Lei, Lei and Zhang, Ziyu and Wang, Yicheng and Huang, Zilong and Luo, Guozhong and Yu, Gang and Fu, Bin and Li, Jiaqi and Wang, Yiran and Huang, Zihao and Cao, Zhiguo and Conde, Marcos V. and Sapozhnikov, Denis and Lee, Byeong Hyun and Park, Dongwon and Hong, Seongmin and Lee, Joonhee and Lee, Seunggyu and Chun, Se Young},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2022},
  pages     = {71-91},
  doi       = {10.1007/978-3-031-25066-8_4},
  url       = {https://mlanthology.org/eccvw/2022/ignatov2022eccvw-efficient/}
}