Learning Optimized Low-Light Image Enhancement for Edge Vision Tasks

Abstract

Low-light image enhancement (LLIE) plays a significant role in edge vision applications (EVA). Despite its widespread practicability, existing LLIE methods are often impractical due to their high computational costs. This study proposes a framework that learns optimized low-light image enhancement to tackle the limitations of existing enhancement methods and accelerate EVA. The proposed framework incorporates a lightweight, mobile-friendly deep network. We optimized the proposed model to INT8 precision with a post-training quantization strategy and deployed it on an edge device. The LLIE model achieves over 199 frames per second (FPS) on a low-power edge board. Additionally, we evaluated the practicability of the optimized model for accelerating vision applications in an edge environment. The experimental results illustrate that our optimized method can significantly accelerate state-of-the-art (SOTA) vision algorithms in challenging low-light conditions across numerous everyday vision tasks, including object detection and image registration.
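The abstract mentions INT8 post-training quantization and an FPS benchmark on an edge board, but this page does not specify the toolchain. The snippet below is a minimal sketch of that workflow using TensorFlow Lite's post-training quantization API, not the authors' actual pipeline; the saved-model path `llie_model`, the 256×256 input resolution, and the random calibration/benchmark tensors are illustrative assumptions.

```python
import time
import numpy as np
import tensorflow as tf

# --- Post-training INT8 quantization (hypothetical model path and input size) ---
def representative_data_gen():
    # Calibration data for activation ranges; in practice, feed real
    # low-light images from the training distribution.
    for _ in range(100):
        yield [np.random.rand(1, 256, 256, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("llie_model")  # assumed path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
with open("llie_int8.tflite", "wb") as f:
    f.write(converter.convert())

# --- Rough throughput (FPS) measurement with the TFLite interpreter ---
interpreter = tf.lite.Interpreter(model_path="llie_int8.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
frame = np.random.randint(0, 256, inp["shape"], dtype=np.uint8)  # dummy frame

runs = 200
start = time.perf_counter()
for _ in range(runs):
    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()
elapsed = time.perf_counter() - start
print(f"~{runs / elapsed:.1f} FPS")
```

On an actual edge board, the interpreter would typically be created with a hardware delegate (e.g., an NPU or GPU delegate) rather than the CPU default; the reported 199+ FPS figure depends on the target accelerator.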

Cite

Text

Sharif et al. "Learning Optimized Low-Light Image Enhancement for Edge Vision Tasks." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024. doi:10.1109/CVPRW63382.2024.00639

Markdown

[Sharif et al. "Learning Optimized Low-Light Image Enhancement for Edge Vision Tasks." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024.](https://mlanthology.org/cvprw/2024/sharif2024cvprw-learning/) doi:10.1109/CVPRW63382.2024.00639

BibTeX

@inproceedings{sharif2024cvprw-learning,
  title     = {{Learning Optimized Low-Light Image Enhancement for Edge Vision Tasks}},
  author    = {Sharif, S. M. A. and Myrzabekov, Azamat and Khujaev, Nodirkhuja and Tsoy, Roman and Kim, Seongwan and Lee, Jaeho},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2024},
  pages     = {6373--6383},
  doi       = {10.1109/CVPRW63382.2024.00639},
  url       = {https://mlanthology.org/cvprw/2024/sharif2024cvprw-learning/}
}