DiffChaser: Detecting Disagreements for Deep Neural Networks
Abstract
Platform migration and customization have become an indispensable part of the deep neural network (DNN) development lifecycle. A high-precision but complex DNN trained in the cloud on massive data and powerful GPUs often goes through an optimization phase (e.g., quantization, compression) before deployment to a target device (e.g., a mobile device). A test set that effectively uncovers disagreements between a DNN and its optimized variant provides valuable feedback for debugging and further improving the optimization procedure. However, minor inconsistencies between a DNN and its optimized version are often hard to detect and easily bypass the original test set. This paper proposes DiffChaser, an automated black-box testing framework to detect untargeted/targeted disagreements between version variants of a DNN. We demonstrate 1) its effectiveness by comparison with state-of-the-art techniques, and 2) its usefulness in real-world DNN product deployment involving quantization and optimization.
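The disagreements the abstract describes arise because quantization perturbs a model's decision boundary slightly, so inputs near the boundary can receive different labels from the original and optimized variants. A minimal sketch of this phenomenon, assuming a toy two-class linear model and a hypothetical uniform weight quantizer (this illustrates the problem setting only, not DiffChaser's actual test-generation algorithm):

```python
import numpy as np

def quantize(w, scale=0.25):
    """Hypothetical uniform quantizer: snap each weight to a grid of step `scale`."""
    return np.round(w / scale) * scale

# Toy two-class "model": logits are a linear function of the input.
W = np.array([[0.60, 0.10],   # weights for class 0
              [0.62, 0.10]])  # weights for class 1
x = np.array([1.0, 0.0])      # an input near the decision boundary

full_pred  = int(np.argmax(W @ x))            # prediction of the original model
quant_pred = int(np.argmax(quantize(W) @ x))  # prediction of the quantized variant

# Near the boundary the two variants disagree (1 vs. 0 here), even though
# they agree on most inputs -- exactly the kind of input that bypasses the
# original test set and that a disagreement-detecting framework must find.
print(full_pred, quant_pred)
```

On clear-cut inputs far from the boundary the two variants agree, which is why such disagreements rarely surface in an ordinary test set and motivate targeted test generation.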
Cite
Text
Xie et al. "DiffChaser: Detecting Disagreements for Deep Neural Networks." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/800
Markdown
[Xie et al. "DiffChaser: Detecting Disagreements for Deep Neural Networks." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/xie2019ijcai-diffchaser/) doi:10.24963/IJCAI.2019/800
BibTeX
@inproceedings{xie2019ijcai-diffchaser,
title = {{DiffChaser: Detecting Disagreements for Deep Neural Networks}},
author = {Xie, Xiaofei and Ma, Lei and Wang, Haijun and Li, Yuekang and Liu, Yang and Li, Xiaohong},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2019},
pages = {5772--5778},
doi = {10.24963/IJCAI.2019/800},
url = {https://mlanthology.org/ijcai/2019/xie2019ijcai-diffchaser/}
}