QuadroNet: Multi-Task Learning for Real-Time Semantic Depth Aware Instance Segmentation
Abstract
Vision for autonomous driving is a uniquely challenging problem: the number of tasks required for full scene understanding is large and diverse; the quality requirements on each task are stringent due to the safety-critical nature of the application; and the latency budget is limited, requiring real-time solutions. In this work we address these challenges with QuadroNet, a one-shot network that jointly produces four outputs in real time (>60 fps) on consumer-grade GPU hardware: 2D detections, instance segmentation, semantic segmentation, and monocular depth estimates. On a challenging real-world autonomous driving dataset, we demonstrate gains over a baseline approach of +2.4% mAP for detection, +3.15% mIoU for semantic segmentation, +5.05% mAP@0.5 for instance segmentation, and +1.36% in the δ < 1.25 metric for depth prediction. We also compare our work against other multi-task learning approaches on Cityscapes and demonstrate state-of-the-art results.
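The abstract's core design, a single shared backbone computed once per frame and reused by four task-specific heads, can be illustrated with a toy sketch. This is not the authors' code; the layer sizes, head names, and output dimensions below are illustrative assumptions, shown only to make the "one-shot, four outputs" structure concrete.

```python
# Hedged toy sketch (not QuadroNet's actual architecture): a shared
# backbone feeding four task heads, mirroring the one-shot multi-task idea.
import numpy as np

rng = np.random.default_rng(0)

def backbone(x):
    # Shared feature extractor: one toy linear layer + ReLU.
    W = rng.standard_normal((x.shape[-1], 16))
    return np.maximum(x @ W, 0.0)

# Four task-specific heads reusing the same shared features.
# Output widths (4 box params, 8 mask logits, 19 classes, 1 depth) are
# illustrative assumptions, not values from the paper.
heads = {
    "detection":    lambda f: f @ rng.standard_normal((16, 4)),
    "instance_seg": lambda f: f @ rng.standard_normal((16, 8)),
    "semantic_seg": lambda f: f @ rng.standard_normal((16, 19)),
    "depth":        lambda f: f @ rng.standard_normal((16, 1)),
}

x = rng.standard_normal((2, 32))    # batch of 2 toy inputs
features = backbone(x)              # computed once, shared by all heads
outputs = {name: head(features) for name, head in heads.items()}
for name, out in outputs.items():
    print(name, out.shape)
```

The key latency property this sketch shows is that the expensive backbone runs once; each head adds only a cheap task-specific computation on top of the shared features.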
Cite
Goel et al. "QuadroNet: Multi-Task Learning for Real-Time Semantic Depth Aware Instance Segmentation." Winter Conference on Applications of Computer Vision, 2021.
BibTeX
@inproceedings{goel2021wacv-quadronet,
title = {{QuadroNet: Multi-Task Learning for Real-Time Semantic Depth Aware Instance Segmentation}},
author = {Goel, Kratarth and Srinivasan, Praveen and Tariq, Sarah and Philbin, James},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2021},
pages = {315-324},
url = {https://mlanthology.org/wacv/2021/goel2021wacv-quadronet/}
}