Mechanistic Interpretability of Binary and Ternary Transformer Networks
Abstract
Recent research (Wang et al., 2023; Ma et al., 2024) has proposed binary and ternary transformer networks as a way to significantly reduce memory usage and improve inference speed in Large Language Models (LLMs) while maintaining accuracy. In this work, we apply techniques from mechanistic interpretability to investigate whether such networks learn distinctly different algorithms from full-precision transformer networks. In particular, we reverse-engineer the algorithms learned for the toy problem of modular addition, where we find that binary and ternary networks learn algorithms similar to those of full-precision networks. This provides evidence against the possibility of using binary and ternary networks as a more interpretable alternative to full-precision networks in the LLM setting.
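For context, the ternary schemes cited above (e.g., BitNet b1.58; Ma et al., 2024) constrain each weight to {-1, 0, +1} via absmean scaling. Below is a minimal PyTorch sketch of such a quantizer; the function name, per-tensor scaling granularity, and eps floor are illustrative assumptions, not the exact implementation from the cited work.

```python
import torch

def ternary_quantize(w: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Map full-precision weights to {-1, 0, +1} via absmean scaling.

    Follows the RoundClip(W / mean(|W|), -1, 1) rule described by
    Ma et al. (2024); the per-tensor scale and eps floor here are
    illustrative choices.
    """
    scale = w.abs().mean().clamp(min=eps)    # absmean scale; eps avoids divide-by-zero
    return (w / scale).round().clamp(-1, 1)  # round to nearest integer, clip to ternary

# Example: quantize a small random weight matrix
w = torch.randn(4, 4)
print(ternary_quantize(w))  # every entry is in {-1., 0., 1.}
```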
Cite
Text
Li. "Mechanistic Interpretability of Binary and Ternary Transformer Networks." ICML 2024 Workshops: MI, 2024.Markdown
[Li. "Mechanistic Interpretability of Binary and Ternary Transformer Networks." ICML 2024 Workshops: MI, 2024.](https://mlanthology.org/icmlw/2024/li2024icmlw-mechanistic/)BibTeX
@inproceedings{li2024icmlw-mechanistic,
title = {{Mechanistic Interpretability of Binary and Ternary Transformer Networks}},
author = {Li, Jason},
booktitle = {ICML 2024 Workshops: MI},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/li2024icmlw-mechanistic/}
}