Width Transfer: On the (In)variance of Width Optimization

Abstract

Optimizing the channel counts for different layers of a CNN has shown great promise in improving the efficiency of CNNs at test-time. However, these methods often introduce large computational overhead (e.g., an additional 2× the FLOPs of standard training). Minimizing this overhead could therefore significantly speed up training. In this work, we propose width transfer, a technique that harnesses the assumption that the optimized widths (or channel counts) are regular across network sizes and depths. We show that width transfer works well across various width optimization algorithms and networks. Specifically, we can achieve up to 320× reduction in width optimization overhead without compromising the top-1 accuracy on ImageNet, making the additional cost of width optimization negligible relative to initial training. Our findings not only suggest an efficient way to conduct width optimization, but also highlight that the widths that lead to better accuracy are invariant to various aspects of network architectures and training data.
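The core idea of transferring an optimized width pattern across network depths can be illustrated with a minimal sketch. The function name, the use of linear interpolation over a normalized depth axis, and the uniform width multiplier are assumptions for illustration, not the authors' exact procedure:

```python
import numpy as np

def transfer_widths(src_widths, dst_depth, width_scale=1.0):
    """Stretch a per-layer width pattern (optimized on a small proxy
    network) to a deeper target network, then scale it uniformly.
    Illustrative sketch only; interpolation scheme is an assumption."""
    src = np.asarray(src_widths, dtype=float)
    # Place source and target layers on a normalized [0, 1] depth axis.
    src_pos = np.linspace(0.0, 1.0, num=len(src))
    dst_pos = np.linspace(0.0, 1.0, num=dst_depth)
    # Interpolate the width pattern across depth, then apply a
    # uniform width multiplier for the larger target network.
    dst = np.interp(dst_pos, src_pos, src) * width_scale
    # Round to integer channel counts, keeping at least one channel.
    return np.maximum(1, np.rint(dst)).astype(int)

# Widths found by a width optimizer on a small 4-layer proxy,
# transferred to an 8-layer target at 2x the width budget.
proxy_widths = [16, 32, 32, 64]
print(transfer_widths(proxy_widths, dst_depth=8, width_scale=2.0))
```

Because the transferred pattern preserves the shape of the optimized widths rather than their absolute values, the expensive optimization only needs to run once on the small proxy.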

Cite

Text

Chin et al. "Width Transfer: On the (In)variance of Width Optimization." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021. doi:10.1109/CVPRW53098.2021.00334

Markdown

[Chin et al. "Width Transfer: On the (In)variance of Width Optimization." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021.](https://mlanthology.org/cvprw/2021/chin2021cvprw-width/) doi:10.1109/CVPRW53098.2021.00334

BibTeX

@inproceedings{chin2021cvprw-width,
  title     = {{Width Transfer: On the (In)variance of Width Optimization}},
  author    = {Chin, Ting-Wu and Marculescu, Diana and Morcos, Ari S.},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2021},
  pages     = {2990--2999},
  doi       = {10.1109/CVPRW53098.2021.00334},
  url       = {https://mlanthology.org/cvprw/2021/chin2021cvprw-width/}
}