MutualNet: Adaptive ConvNet via Mutual Learning from Network Width and Resolution
Abstract
We propose the width-resolution mutual learning method (MutualNet) to train a network that is executable at dynamic resource constraints to achieve adaptive accuracy-efficiency trade-offs at runtime. Our method trains a cohort of sub-networks with different widths using different input resolutions to mutually learn multi-scale representations for each sub-network. It achieves consistently better ImageNet top-1 accuracy over the state-of-the-art adaptive network US-Net under different computation constraints, and outperforms the best compound scaled MobileNet in EfficientNet by 1.5%. The superiority of our method is also validated on COCO object detection and instance segmentation as well as transfer learning. Surprisingly, the training strategy of MutualNet can also boost the performance of a single network, which substantially outperforms the powerful AutoAugmentation in both efficiency (GPU search hours: 15000 vs. 0) and accuracy (ImageNet: 77.6% vs. 78.6%). Code is provided in supplementary material.
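The core training idea above (a cohort of sub-networks at different widths, each fed a different input resolution so they mutually learn multi-scale representations) can be sketched as a per-step sampling routine. This is a minimal illustrative sketch, not the paper's implementation: the width range, candidate resolutions, number of sub-networks, and the sandwich-style rule of always including the smallest and full widths are assumptions for illustration.

```python
import random

# Hedged sketch of MutualNet-style width/resolution sampling for one
# training step. The width range, resolution list, and sub-network count
# below are illustrative assumptions, not the paper's exact settings.

WIDTH_RANGE = (0.25, 1.0)           # smallest and full width multipliers
RESOLUTIONS = [224, 192, 160, 128]  # candidate input resolutions

def sample_step(n_subnets=4, rng=random):
    """Return (width, resolution) pairs for one mutual-learning step.

    Sandwich-style sampling (an assumption borrowed from slimmable
    networks): always train the smallest and the full-width network,
    plus randomly sampled intermediate widths. The full-width network
    sees the highest resolution, while each sub-network is fed a
    randomly chosen (possibly reduced) resolution, so different widths
    learn from different scales.
    """
    lo, hi = WIDTH_RANGE
    widths = [lo, hi] + [round(rng.uniform(lo, hi), 2)
                         for _ in range(n_subnets - 2)]
    pairs = []
    for w in sorted(widths, reverse=True):
        if w == hi:
            res = RESOLUTIONS[0]           # full network: full resolution
        else:
            res = rng.choice(RESOLUTIONS)  # sub-network: random resolution
        pairs.append((w, res))
    return pairs
```

In an actual training loop, each sampled (width, resolution) pair would define one forward/backward pass on a shared-weight network, with gradients accumulated across the cohort before each optimizer update.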
Cite
Text
Yang et al. "MutualNet: Adaptive ConvNet via Mutual Learning from Network Width and Resolution." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58452-8_18
Markdown
[Yang et al. "MutualNet: Adaptive ConvNet via Mutual Learning from Network Width and Resolution." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/yang2020eccv-mutualnet/) doi:10.1007/978-3-030-58452-8_18
BibTeX
@inproceedings{yang2020eccv-mutualnet,
title = {{MutualNet: Adaptive ConvNet via Mutual Learning from Network Width and Resolution}},
author = {Yang, Taojiannan and Zhu, Sijie and Chen, Chen and Yan, Shen and Zhang, Mi and Willis, Andrew},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2020},
doi = {10.1007/978-3-030-58452-8_18},
url = {https://mlanthology.org/eccv/2020/yang2020eccv-mutualnet/}
}