Deformable Butterfly: A Highly Structured and Sparse Linear Transform

Abstract

We introduce a new kind of linear transform, named Deformable Butterfly (DeBut), that generalizes conventional butterfly matrices and can be adapted to various input-output dimensions. It inherits the fine-to-coarse-grained learnable hierarchy of traditional butterflies, and when deployed in neural networks, the prominent structure and sparsity of a DeBut layer constitute a new way to compress the network. We apply DeBut as a drop-in replacement for standard fully connected and convolutional layers, and demonstrate its superiority in homogenizing a neural network and endowing it with favorable properties such as light weight and low inference complexity, without compromising accuracy. The natural complexity-accuracy tradeoff arising from the myriad deformations of a DeBut layer also opens up new room for analytical and practical research. The code and appendix are publicly available at: https://github.com/ruilin0212/DeBut.
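To make the structured sparsity concrete, here is a minimal NumPy sketch of the *conventional* square butterfly factorization that DeBut generalizes (this is an illustrative assumption, not the paper's DeBut implementation; the function name `butterfly_factor` is hypothetical). A dense n x n map is replaced by a product of log2(n) sparse factors, each with only two nonzeros per row, giving O(n log n) parameters instead of n^2:

```python
import numpy as np

def butterfly_factor(n, stride, rng):
    """One classic butterfly factor of size n x n: each row j has exactly
    two nonzeros, at columns j and j XOR stride (a 2x2 mixing block)."""
    B = np.zeros((n, n))
    for j in range(n):
        B[j, j] = rng.standard_normal()
        B[j, j ^ stride] = rng.standard_normal()
    return B

rng = np.random.default_rng(0)
n = 8
# A full butterfly transform is the product of log2(n) such factors.
factors = [butterfly_factor(n, 1 << k, rng) for k in range(int(np.log2(n)))]
W = np.linalg.multi_dot(factors)

# Parameter count: 3 factors * 2n nonzeros = 48, versus n^2 = 64 dense.
nnz = sum(int(np.count_nonzero(B)) for B in factors)
print(nnz, n * n)
```

DeBut relaxes this square, power-of-two pattern to rectangular block-sparse factors, which is what lets it replace fully connected and convolutional layers of arbitrary input-output dimensions.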

Cite

Text

Lin et al. "Deformable Butterfly: A Highly Structured and Sparse Linear Transform." Neural Information Processing Systems, 2021.

Markdown

[Lin et al. "Deformable Butterfly: A Highly Structured and Sparse Linear Transform." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/lin2021neurips-deformable/)

BibTeX

@inproceedings{lin2021neurips-deformable,
  title     = {{Deformable Butterfly: A Highly Structured and Sparse Linear Transform}},
  author    = {Lin, Rui and Ran, Jie and Chiu, King Hung and Chesi, Graziano and Wong, Ngai},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/lin2021neurips-deformable/}
}