Shoot to Know What: An Application of Deep Networks on Mobile Devices

Abstract

Convolutional neural networks (CNNs) have achieved impressive performance across a wide range of computer vision tasks. However, deploying them on mobile devices remains impractical due to their high computational complexity. In this demo, we propose the Quantized CNN (Q-CNN), an efficient framework for CNN models that enables efficient and accurate image classification on mobile devices. Our Q-CNN framework dramatically accelerates computation and reduces storage/memory consumption, so that mobile devices can independently run an ImageNet-scale CNN model. Experiments on the ILSVRC-12 dataset demonstrate a 4-6x speed-up and 15-20x compression, with merely a one-percentage-point drop in classification accuracy. Based on the Q-CNN framework, even mobile devices can accurately classify images within one second.
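The compression reported above comes from quantizing network weights so that each layer stores small codebooks plus per-weight indices instead of full floating-point parameters. As a rough illustration only (a minimal sketch, not the authors' exact Q-CNN procedure), the snippet below product-quantizes a weight matrix with per-subspace k-means in NumPy; the function names and parameters (`num_subspaces`, `num_centroids`) are this sketch's own, not from the paper.

```python
import numpy as np

def pq_quantize(W, num_subspaces=4, num_centroids=16, iters=20, seed=0):
    """Product-quantize a weight matrix: split columns into subspaces,
    run k-means in each subspace, and keep only centroids + uint8 codes."""
    rng = np.random.default_rng(seed)
    d = W.shape[1]
    assert d % num_subspaces == 0, "columns must split evenly into subspaces"
    sub = d // num_subspaces
    codebooks, codes = [], []
    for s in range(num_subspaces):
        X = W[:, s * sub:(s + 1) * sub]
        # Initialize centroids from random rows, then run Lloyd's algorithm.
        C = X[rng.choice(len(X), num_centroids, replace=False)].copy()
        for _ in range(iters):
            dist = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
            assign = dist.argmin(1)
            for k in range(num_centroids):
                pts = X[assign == k]
                if len(pts):
                    C[k] = pts.mean(0)
        codebooks.append(C)
        codes.append(assign.astype(np.uint8))
    return codebooks, codes

def pq_reconstruct(codebooks, codes):
    """Rebuild an approximate weight matrix from codebooks and codes."""
    return np.hstack([C[a] for C, a in zip(codebooks, codes)])
```

Storing one byte per sub-vector plus a few small codebooks, instead of 32-bit floats for every weight, is the source of the order-of-magnitude compression; at inference time, layer responses can be looked up per centroid rather than recomputed per weight, which is where the speed-up comes from.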

Cite

Text

Wu et al. "Shoot to Know What: An Application of Deep Networks on Mobile Devices." AAAI Conference on Artificial Intelligence, 2016. doi:10.1609/aaai.v30i1.9831

Markdown

[Wu et al. "Shoot to Know What: An Application of Deep Networks on Mobile Devices." AAAI Conference on Artificial Intelligence, 2016.](https://mlanthology.org/aaai/2016/wu2016aaai-shoot/) doi:10.1609/aaai.v30i1.9831

BibTeX

@inproceedings{wu2016aaai-shoot,
  title     = {{Shoot to Know What: An Application of Deep Networks on Mobile Devices}},
  author    = {Wu, Jiaxiang and Hu, Qinghao and Leng, Cong and Cheng, Jian},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2016},
  pages     = {4399--4400},
  doi       = {10.1609/aaai.v30i1.9831},
  url       = {https://mlanthology.org/aaai/2016/wu2016aaai-shoot/}
}