Intelligent Scene Caching to Improve Accuracy for Energy-Constrained Embedded Vision

Abstract

We describe an efficient method for improving the performance of vision algorithms operating on video streams by reducing, in a data-aware manner, the amount of data captured and transferred from image sensors to analysis servers. The key idea is to combine guided, highly heterogeneous sampling with an intelligent Scene Cache, enabling the system to adapt to spatial and temporal patterns in the scene and thereby reduce redundant data capture and processing. A software prototype of our framework running on a general-purpose embedded processor achieves 56% higher object detection accuracy at comparable energy consumption (a 4% improvement) relative to an H.264 hardware accelerator.
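The paper's Scene Cache adapts capture to spatial and temporal scene patterns. As a rough illustration of the underlying idea (not the authors' implementation; tile size, change metric, and threshold here are invented for the sketch), a tile-based cache can skip re-transferring regions of the frame that have not changed since the last capture:

```python
import numpy as np

class SceneCache:
    """Illustrative tile-based scene cache: only tiles whose content
    has changed beyond a threshold are refreshed and flagged for
    transfer/processing; static regions are served from the cache."""

    def __init__(self, frame_shape, tile=8, threshold=10.0):
        self.tile = tile
        self.threshold = threshold
        self.cache = np.zeros(frame_shape, dtype=np.float64)

    def update(self, frame):
        """Compare each tile of `frame` against the cache.
        Returns the (row, col) indices of tiles that changed;
        only those tiles are written back into the cache."""
        changed = []
        h, w = frame.shape
        t = self.tile
        for i in range(0, h, t):
            for j in range(0, w, t):
                new = frame[i:i + t, j:j + t]
                old = self.cache[i:i + t, j:j + t]
                # Mean absolute difference as a simple change metric.
                if np.mean(np.abs(new - old)) > self.threshold:
                    self.cache[i:i + t, j:j + t] = new
                    changed.append((i // t, j // t))
        return changed

# A static scene yields no changed tiles after the first capture,
# so nothing would need to be transferred for analysis.
cache = SceneCache((16, 16), tile=8, threshold=5.0)
frame = np.full((16, 16), 100.0)
first = cache.update(frame)    # all 4 tiles populate the cold cache
second = cache.update(frame)   # unchanged scene: no tiles flagged
```

In a real deployment the sampling would be guided (heterogeneous in space and time) rather than uniform per-tile differencing, which is the part the paper's framework contributes.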

Cite

Text

Simpson et al. "Intelligent Scene Caching to Improve Accuracy for Energy-Constrained Embedded Vision." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020. doi:10.1109/CVPRW50498.2020.00370

Markdown

[Simpson et al. "Intelligent Scene Caching to Improve Accuracy for Energy-Constrained Embedded Vision." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020.](https://mlanthology.org/cvprw/2020/simpson2020cvprw-intelligent/) doi:10.1109/CVPRW50498.2020.00370

BibTeX

@inproceedings{simpson2020cvprw-intelligent,
  title     = {{Intelligent Scene Caching to Improve Accuracy for Energy-Constrained Embedded Vision}},
  author    = {Simpson, Benjamin and Lubana, Ekdeep Singh and Liu, Yuchen and Dick, Robert P.},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2020},
  pages     = {3114--3122},
  doi       = {10.1109/CVPRW50498.2020.00370},
  url       = {https://mlanthology.org/cvprw/2020/simpson2020cvprw-intelligent/}
}