Eye Semantic Segmentation with a Lightweight Model
Abstract
In this paper, we present a multi-class eye segmentation method that can run under hardware limitations for real-time inference. Our approach includes three major stages: obtain a grayscale image from the input, segment three distinct eye regions with a deep network, and remove incorrect areas with heuristic filters. Our model is based on an encoder-decoder structure, with depthwise convolution as the key operation for reducing computation cost. We experiment on OpenEDS, a large-scale dataset of eye images captured by a head-mounted display with two synchronized eye-facing cameras. We achieve a mean intersection over union (mIoU) of 94.85% with a model size of 0.4 megabytes.
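The saving from depthwise (separable) convolution mentioned in the abstract can be illustrated by counting parameters. This is a minimal sketch under assumed layer sizes (64→128 channels, 3×3 kernels); the figures are illustrative and are not taken from the paper.

```python
def standard_conv_params(c_in, c_out, k):
    # Standard convolution: one k x k filter per (input, output) channel pair.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise step: one k x k filter per input channel,
    # followed by a 1x1 pointwise convolution to mix channels.
    return c_in * k * k + c_in * c_out

# Illustrative sizes (not from the paper): 3x3 kernels, 64 -> 128 channels.
std = standard_conv_params(64, 128, 3)        # 73,728 parameters
sep = depthwise_separable_params(64, 128, 3)  # 8,768 parameters
print(std, sep, round(std / sep, 1))          # roughly an 8.4x reduction
```

Stacking such layers throughout the encoder-decoder is what allows the whole model to fit in about 0.4 MB.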
Cite
Text
Huynh et al. "Eye Semantic Segmentation with a Lightweight Model." IEEE/CVF International Conference on Computer Vision Workshops, 2019. doi:10.1109/ICCVW.2019.00457
Markdown
[Huynh et al. "Eye Semantic Segmentation with a Lightweight Model." IEEE/CVF International Conference on Computer Vision Workshops, 2019.](https://mlanthology.org/iccvw/2019/huynh2019iccvw-eye/) doi:10.1109/ICCVW.2019.00457
BibTeX
@inproceedings{huynh2019iccvw-eye,
title = {{Eye Semantic Segmentation with a Lightweight Model}},
author = {Huynh, Van Thong and Kim, Soo-Hyung and Lee, Gueesang and Yang, Hyung-Jeong},
booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
year = {2019},
pages = {3694--3697},
doi = {10.1109/ICCVW.2019.00457},
url = {https://mlanthology.org/iccvw/2019/huynh2019iccvw-eye/}
}