Data-Free Knowledge Distillation for Object Detection
Abstract
We present DeepInversion for Object Detection (DIODE) to enable data-free knowledge distillation for neural networks trained on the object detection task. From a data-free perspective, DIODE synthesizes images given only an off-the-shelf pre-trained detection network, without any prior domain knowledge, generator network, or pre-computed activations. DIODE relies on two key components: first, an extensive set of differentiable augmentations to improve image fidelity and distillation effectiveness; second, a novel automated bounding box and category sampling scheme that enables synthesizing a large number of images with diverse spatial layouts and object categories. The resulting images enable data-free knowledge distillation from a teacher to a student detector initialized from scratch. In an extensive set of experiments, we demonstrate that DIODE's ability to match the original training distribution consistently enables more effective knowledge distillation than out-of-distribution proxy datasets, which are otherwise unavoidable in a data-free setup given the absence of the original domain knowledge.
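The sketch below illustrates, in minimal PyTorch, the general DeepInversion-style synthesis idea the abstract describes: optimizing random noise into images by backpropagating the teacher's detection loss for sampled boxes/categories together with a BatchNorm-statistics prior. It is not the authors' code; the `teacher(images, sampled_targets)` interface, the hyperparameters, and the regularizer weighting are illustrative assumptions, and DIODE's differentiable augmentations and automated box/category sampler are omitted.

```python
import torch
import torch.nn as nn

class BNStatsHook:
    """Penalize divergence between the batch statistics of the current input
    and the BatchNorm running statistics stored in the teacher
    (DeepInversion-style feature-distribution prior)."""
    def __init__(self, module: nn.BatchNorm2d):
        self.loss = torch.tensor(0.0)
        self.handle = module.register_forward_hook(self._hook)

    def _hook(self, module, inputs, output):
        x = inputs[0]
        mean = x.mean(dim=(0, 2, 3))
        var = x.var(dim=(0, 2, 3), unbiased=False)
        self.loss = ((mean - module.running_mean) ** 2).sum() + \
                    ((var - module.running_var) ** 2).sum()

def synthesize(teacher, sampled_targets, steps=2000, lr=0.05,
               batch=8, size=320, bn_weight=1e-2):
    """Optimize random noise into detector-friendly images for the given
    sampled bounding boxes / categories (hypothetical target format)."""
    hooks = [BNStatsHook(m) for m in teacher.modules()
             if isinstance(m, nn.BatchNorm2d)]
    images = torch.randn(batch, 3, size, size, requires_grad=True)
    opt = torch.optim.Adam([images], lr=lr)
    teacher.eval()
    for _ in range(steps):
        opt.zero_grad()
        # Assumed interface: the teacher returns its standard detection loss
        # (classification + box regression) for the sampled targets.
        det_loss = teacher(images, sampled_targets)
        bn_loss = sum(h.loss for h in hooks)
        (det_loss + bn_weight * bn_loss).backward()
        opt.step()
    for h in hooks:
        h.handle.remove()
    return images.detach()
```

The synthesized batches would then serve as the proxy dataset for distilling the teacher into a randomly initialized student detector.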
Cite
Text
Chawla et al. "Data-Free Knowledge Distillation for Object Detection." Winter Conference on Applications of Computer Vision, 2021.

Markdown
[Chawla et al. "Data-Free Knowledge Distillation for Object Detection." Winter Conference on Applications of Computer Vision, 2021.](https://mlanthology.org/wacv/2021/chawla2021wacv-datafree/)

BibTeX
@inproceedings{chawla2021wacv-datafree,
  title     = {{Data-Free Knowledge Distillation for Object Detection}},
  author    = {Chawla, Akshay and Yin, Hongxu and Molchanov, Pavlo and Alvarez, Jose},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2021},
  pages     = {3289--3298},
  url       = {https://mlanthology.org/wacv/2021/chawla2021wacv-datafree/}
}