Interpretable Lightweight Transformer via Unrolling of Learned Graph Smoothness Priors

Abstract

We build interpretable and lightweight transformer-like neural networks by unrolling iterative optimization algorithms that minimize graph smoothness priors, namely the quadratic graph Laplacian regularizer (GLR) and the $\ell_1$-norm graph total variation (GTV), subject to an interpolation constraint. The crucial insight is that a normalized signal-dependent graph learning module amounts to a variant of the basic self-attention mechanism in conventional transformers. Unlike "black-box" transformers, which must learn large key, query and value matrices to compute scaled dot products as affinities and subsequent output embeddings, resulting in huge parameter sets, our unrolled networks employ shallow CNNs to learn low-dimensional features per node, from which pairwise Mahalanobis distances are computed to construct sparse similarity graphs. At each layer, given a learned graph, the target interpolated signal is simply a low-pass filtered output derived from the minimization of the assumed graph smoothness prior, leading to a dramatic reduction in parameter count. Experiments on two image interpolation applications verify the restoration performance, parameter efficiency and robustness to covariate shift of our graph-based unrolled networks compared to conventional transformers.
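
To make the pipeline concrete, here is a minimal NumPy sketch of the two core steps the abstract describes: building a similarity graph from learned per-node features via Mahalanobis distances, and interpolating a signal by minimizing the GLR prior $x^\top L x$ under an interpolation constraint. This is an illustrative sketch under simplifying assumptions (a dense graph rather than the paper's sparse construction, the combinatorial Laplacian $L = D - W$, and hypothetical function names), not the authors' implementation.

```python
import numpy as np

def mahalanobis_affinity(F, M):
    """Affinity matrix W with w_ij = exp(-(f_i - f_j)^T M (f_i - f_j)).

    F : (n, d) per-node features (e.g., from a shallow CNN).
    M : (d, d) PSD metric matrix.
    Row-normalizing W gives a softmax over negative Mahalanobis
    distances, i.e., an attention-like weighting.
    """
    diff = F[:, None, :] - F[None, :, :]             # (n, n, d) pairwise differences
    d2 = np.einsum('ijk,kl,ijl->ij', diff, M, diff)  # squared Mahalanobis distances
    return np.exp(-d2)

def glr_interpolate(W, y, known):
    """Minimize x^T L x subject to x[known] = y[known] (GLR prior).

    With L = D - W, the unknown entries x_U solve the linear system
    L_UU x_U = -L_US y_S, where S indexes the known samples.
    """
    L = np.diag(W.sum(axis=1)) - W                   # combinatorial graph Laplacian
    known = np.asarray(known)
    unknown = np.setdiff1d(np.arange(len(y)), known)
    x = np.array(y, dtype=float)
    x[unknown] = np.linalg.solve(L[np.ix_(unknown, unknown)],
                                 -L[np.ix_(unknown, known)] @ y[known])
    return x

# Toy usage: interpolate two missing samples of a 5-node signal.
F = np.random.rand(5, 3)                 # stand-in for learned CNN features
W = mahalanobis_affinity(F, np.eye(3))   # identity metric as a placeholder for M
y = np.array([1.0, 0.0, 2.0, 0.0, 3.0])
x = glr_interpolate(W, y, known=[0, 2, 4])
```

The constrained solve plays the role of the low-pass graph filter at each unrolled layer: only the features and metric $M$ are learned, which is where the parameter savings over full key/query/value matrices come from.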

Cite

Text

Do et al. "Interpretable Lightweight Transformer via Unrolling of Learned Graph Smoothness Priors." Neural Information Processing Systems, 2024. doi:10.52202/079017-0206

Markdown

[Do et al. "Interpretable Lightweight Transformer via Unrolling of Learned Graph Smoothness Priors." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/do2024neurips-interpretable/) doi:10.52202/079017-0206

BibTeX

@inproceedings{do2024neurips-interpretable,
  title     = {{Interpretable Lightweight Transformer via Unrolling of Learned Graph Smoothness Priors}},
  author    = {Do, Tam Thuc and Eftekhar, Parham and Hosseini, Seyed Alireza and Cheung, Gene and Chou, Philip A.},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-0206},
  url       = {https://mlanthology.org/neurips/2024/do2024neurips-interpretable/}
}