InhibiDistilbert: Knowledge Distillation for a ReLU and Addition-Based Transformer
Abstract
This work explores optimizing transformer-based language models by integrating model compression techniques with inhibitor attention, a novel alternative attention mechanism. Inhibitor attention employs Manhattan distances and ReLU activations instead of the matrix multiplications and softmax activation of the conventional scaled dot-product attention. This shift offers potential computational and energy savings while maintaining model effectiveness. We propose further adjustments to improve the inhibitor mechanism's training efficiency and evaluate its performance on the DistilBERT architecture. Our knowledge distillation experiments indicate that the modified inhibitor transformer model can achieve competitive performance on standard NLP benchmarks, including General Language Understanding Evaluation (GLUE) and sentiment analysis tasks.
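To make the contrast with scaled dot-product attention concrete, the sketch below illustrates the two ingredients named in the abstract: pairwise Manhattan (L1) distances between queries and keys used as inhibition scores, and ReLU-based aggregation of values in place of a softmax-weighted sum. This is an illustrative PyTorch sketch under our own assumptions (the function name, the scaling by head dimension, and the exact aggregation rule are ours), not the paper's precise inhibitor formulation.

```python
import torch
import torch.nn.functional as F

def inhibitor_attention_sketch(Q, K, V):
    """Illustrative addition/ReLU-based attention sketch (not the paper's exact mechanism).

    Q, K, V: tensors of shape (batch, seq_len, d).
    """
    d = Q.shape[-1]
    # Pairwise Manhattan distances act as inhibition scores:
    # Z[b, i, j] = (1/d) * sum_k |Q[b, i, k] - K[b, j, k]|
    # (larger distance -> stronger inhibition of the corresponding value).
    Z = torch.cdist(Q, K, p=1) / d          # (batch, seq_q, seq_k)

    # ReLU-based aggregation: each value coordinate is reduced by the
    # inhibition score and clipped at zero before summing over key
    # positions, replacing the softmax-weighted sum of dot-product attention.
    H = F.relu(V.unsqueeze(1) - Z.unsqueeze(-1)).sum(dim=2)  # (batch, seq_q, d)
    return H

# Example usage on random inputs:
Q = K = V = torch.randn(2, 8, 64)
out = inhibitor_attention_sketch(Q, K, V)   # shape: (2, 8, 64)
```

Note that both the distance computation and the aggregation rely only on additions, subtractions, comparisons, and ReLU, which is the source of the potential computational and energy savings discussed in the abstract.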
Cite
Text
Zhang and Brannvall. "InhibiDistilbert: Knowledge Distillation for a ReLU and Addition-Based Transformer." ICLR 2025 Workshops: SLLM, 2025.
Markdown
[Zhang and Brannvall. "InhibiDistilbert: Knowledge Distillation for a ReLU and Addition-Based Transformer." ICLR 2025 Workshops: SLLM, 2025.](https://mlanthology.org/iclrw/2025/zhang2025iclrw-inhibidistilbert/)
BibTeX
@inproceedings{zhang2025iclrw-inhibidistilbert,
title = {{InhibiDistilbert: Knowledge Distillation for a ReLU and Addition-Based Transformer}},
author = {Zhang, Tony and Brannvall, Rickard},
booktitle = {ICLR 2025 Workshops: SLLM},
year = {2025},
url = {https://mlanthology.org/iclrw/2025/zhang2025iclrw-inhibidistilbert/}
}