Squeeze U-Net: A Memory and Energy Efficient Image Segmentation Network
Abstract
To facilitate the implementation of deep neural networks on embedded systems, keeping memory and computation requirements low is critical, particularly for real-time mobile use. In this work, we propose a SqueezeNet-inspired version of U-Net for image segmentation that achieves a 12X reduction in model size to 32 MB and a 3.2X reduction in multiply-accumulate operations (MACs), from 287 billion to 88 billion, for inference on the CamVid dataset while preserving accuracy. The proposed Squeeze U-Net is efficient in both MACs and memory use. Our performance results, obtained with TensorFlow 1.14, Python 3.6, and CUDA 10.1.243 on an NVIDIA K40 GPU, show that Squeeze U-Net is 17% faster for inference and 52% faster for training than U-Net at the same accuracy.
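The memory savings come from the SqueezeNet design: a fire module replaces a single 3x3 convolution with a narrow 1x1 "squeeze" convolution followed by parallel 1x1 and 3x3 "expand" convolutions. A minimal sketch of the resulting weight-count reduction, using illustrative channel sizes (64 input channels, 128 output channels, squeeze width 16) that are assumptions rather than the paper's actual configuration:

```python
# Hypothetical illustration of why SqueezeNet-style fire modules shrink a
# U-Net. All channel sizes below are example values, not taken from the paper.

def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weight count of a k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def fire_params(c_in: int, squeeze: int, e1: int, e3: int) -> int:
    """Weight count of a fire module: 1x1 squeeze, then parallel 1x1 + 3x3 expand."""
    return (conv_params(c_in, squeeze, 1)     # 1x1 squeeze
            + conv_params(squeeze, e1, 1)     # 1x1 expand branch
            + conv_params(squeeze, e3, 3))    # 3x3 expand branch

standard = conv_params(64, 128, 3)    # plain 3x3 conv, 128 output channels
fire = fire_params(64, 16, 64, 64)    # fire module, also 128 output channels
print(standard, fire, standard / fire)
```

With these example widths the fire module needs 11,264 weights where the plain 3x3 convolution needs 73,728, roughly a 6.5X reduction for that layer, which is the mechanism behind the overall model-size and MAC reductions reported above.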
Cite
Text
Beheshti and Johnsson. "Squeeze U-Net: A Memory and Energy Efficient Image Segmentation Network." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020. doi:10.1109/CVPRW50498.2020.00190
Markdown
[Beheshti and Johnsson. "Squeeze U-Net: A Memory and Energy Efficient Image Segmentation Network." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020.](https://mlanthology.org/cvprw/2020/beheshti2020cvprw-squeeze/) doi:10.1109/CVPRW50498.2020.00190
BibTeX
@inproceedings{beheshti2020cvprw-squeeze,
title = {{Squeeze U-Net: A Memory and Energy Efficient Image Segmentation Network}},
author = {Beheshti, Nazanin and Johnsson, S. Lennart},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2020},
pages = {1495-1504},
doi = {10.1109/CVPRW50498.2020.00190},
url = {https://mlanthology.org/cvprw/2020/beheshti2020cvprw-squeeze/}
}