Live Demonstration: Face Recognition on an Ultra-Low Power Event-Driven Convolutional Neural Network ASIC
Abstract
We demonstrate an event-driven Deep Learning (DL) hardware-software ecosystem. The user-friendly software tools port models from Keras (a popular machine learning library), automatically convert DL models to spiking equivalents, i.e., Spiking Convolutional Neural Networks (SCNNs), and run spiking simulations of the converted models on a hardware emulator for testing and prototyping. More importantly, the software ports the converted models onto a novel, ultra-low power, real-time, event-driven SCNN ASIC: the DynapCNN chip. As an example, we present an interactive demonstration of a real-time face recognition system built with this pipeline.
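The conversion step rests on rate coding: a ReLU activation in the trained CNN is approximated by the firing rate of an integrate-and-fire (IF) neuron receiving the same weighted input. The following minimal Python sketch illustrates that principle on a toy layer; it is an illustrative assumption of how such a conversion behaves, not the authors' toolchain or the DynapCNN API.

# Toy illustration of ANN-to-SCNN conversion via rate coding (not the paper's software).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "trained" dense layer: 4 inputs -> 3 ReLU units.
W = rng.normal(scale=0.5, size=(3, 4))
x = rng.uniform(size=4)

# Analog (conventional CNN) forward pass.
analog_out = np.maximum(0.0, W @ x)

# Spiking equivalent: inject the same net input into IF neurons each timestep
# and count output spikes over a simulation window.
T = 1000                      # number of timesteps
threshold = 1.0
v = np.zeros(3)               # membrane potentials
spike_counts = np.zeros(3)

for _ in range(T):
    v += W @ x                # integrate constant input current
    fired = v >= threshold
    spike_counts += fired
    v[fired] -= threshold     # reset by subtraction preserves the rate code

rates = spike_counts / T      # estimated firing rates

print("analog ReLU output:", np.round(analog_out, 3))
print("IF firing rates:   ", np.round(rates, 3))

For inputs within the neuron's dynamic range, the measured firing rates closely track the analog ReLU activations, which is the property the software exploits when mapping each convolutional layer of the Keras model onto spiking layers of the chip.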
Cite
Text
Liu et al. "Live Demonstration: Face Recognition on an Ultra-Low Power Event-Driven Convolutional Neural Network ASIC." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019. doi:10.1109/CVPRW.2019.00213
Markdown
[Liu et al. "Live Demonstration: Face Recognition on an Ultra-Low Power Event-Driven Convolutional Neural Network ASIC." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.](https://mlanthology.org/cvprw/2019/liu2019cvprw-live-a/) doi:10.1109/CVPRW.2019.00213
BibTeX
@inproceedings{liu2019cvprw-live-a,
title = {{Live Demonstration: Face Recognition on an Ultra-Low Power Event-Driven Convolutional Neural Network ASIC}},
author = {Liu, Qian and Richter, Ole and Nielsen, Carsten and Sheik, Sadique and Indiveri, Giacomo and Qiao, Ning},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2019},
pages = {1680--1681},
doi = {10.1109/CVPRW.2019.00213},
url = {https://mlanthology.org/cvprw/2019/liu2019cvprw-live-a/}
}