Laughing Hyena Distillery: Extracting Compact Recurrences from Convolutions
Abstract
Recent advances in attention-free sequence models rely on convolutions as alternatives to the attention operator at the core of Transformers. In particular, long convolution sequence models have achieved state-of-the-art performance in many domains, but incur a significant cost during auto-regressive inference workloads -- naively requiring a full pass (or caching of activations) over the input sequence for each generated token -- similarly to attention-based models. In this paper, we seek to enable $\mathcal O(1)$ compute and memory cost per token in any pre-trained long convolution architecture to reduce memory footprint and increase throughput during generation. Concretely, our methods consist of extracting low-dimensional linear state-space models from each convolution layer, building upon rational interpolation and model-order reduction techniques. We further introduce architectural improvements to convolution-based layers such as Hyena: by weight-tying the filters across channels into heads, we achieve higher pre-training quality and reduce the number of filters to be distilled. The resulting model achieves 10x higher throughput than Transformers and 1.5x higher than Hyena at 1.3B parameters, without any loss in quality after distillation.
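As a rough illustration of the core idea (a minimal sketch, not the authors' implementation; the filter values, state dimension `d`, and all variable names below are illustrative assumptions), the snippet shows how a long convolution filter can be replaced by a small diagonal state-space recurrence whose impulse response matches the filter, so that each generated token costs an O(d) state update instead of a pass over the entire prefix.

```python
import numpy as np

# Illustrative sizes (placeholders, not from the paper).
L, d = 1024, 16            # sequence length, distilled state dimension

# A pre-trained long-convolution filter would be given; here we fabricate a decaying one.
rng = np.random.default_rng(0)
h = rng.standard_normal(L) * np.exp(-0.01 * np.arange(L))

# A diagonal SSM parameterization with impulse response h_ssm[t] = C @ (A**t * B).
# Distillation would fit (A, B, C) so that h_ssm approximates h; here the values
# are placeholders chosen only so that the recurrence is stable (|A| < 1).
A = np.exp(-0.05 * (1 + np.arange(d)))   # diagonal state transition
B = np.ones(d)
C = rng.standard_normal(d) / d

def ssm_impulse_response(A, B, C, length):
    """Impulse response of the diagonal SSM: h[t] = C @ (A**t * B)."""
    t = np.arange(length)[:, None]
    return (A[None, :] ** t * B[None, :]) @ C

def generate_step(state, u_t, A, B, C):
    """O(d) recurrent update per generated token (no pass over the history)."""
    state = A * state + B * u_t          # x_t = A x_{t-1} + B u_t
    y_t = C @ state                      # y_t = C x_t
    return state, y_t

# Auto-regressive-style rollout: constant memory (the d-dimensional state) per token.
u = rng.standard_normal(L)
state = np.zeros(d)
ys = []
for u_t in u:
    state, y_t = generate_step(state, u_t, A, B, C)
    ys.append(y_t)

# Sanity check: the recurrence reproduces the causal convolution with the
# SSM's own impulse response.
h_ssm = ssm_impulse_response(A, B, C, L)
y_conv = np.convolve(u, h_ssm)[:L]
assert np.allclose(ys, y_conv, atol=1e-6)
```

In this sketch the distillation step itself is omitted: the paper obtains (A, B, C) from a pre-trained filter via rational interpolation and model-order reduction, whereas here they are fixed placeholders used only to demonstrate the O(1)-per-token recurrence.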
Cite
Text
Massaroli et al. "Laughing Hyena Distillery: Extracting Compact Recurrences from Convolutions." Neural Information Processing Systems, 2023.
Markdown
[Massaroli et al. "Laughing Hyena Distillery: Extracting Compact Recurrences from Convolutions." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/massaroli2023neurips-laughing/)
BibTeX
@inproceedings{massaroli2023neurips-laughing,
title = {{Laughing Hyena Distillery: Extracting Compact Recurrences from Convolutions}},
author = {Massaroli, Stefano and Poli, Michael and Fu, Dan and Kumbong, Hermann and Parnichkun, Rom and Romero, David and Timalsina, Aman and McIntyre, Quinn and Chen, Beidi and Rudra, Atri and Zhang, Ce and Ré, Christopher and Ermon, Stefano and Bengio, Yoshua},
booktitle = {Neural Information Processing Systems},
year = {2023},
url = {https://mlanthology.org/neurips/2023/massaroli2023neurips-laughing/}
}