Visualizing Linear RNNs Through Unrolling
Abstract
Neural networks are revolutionizing artificial intelligence (AI), but suffer from poor explainability; for example, recurrent neural networks (RNNs) hold massive potential for sequential or real-time information processing, but their recurrences exacerbate explainability issues and make understanding or predicting RNN behavior difficult. One way to explain neural networks is SplineCam, which illustrates a 2D projection of a neural network's analytical form; however, it does not natively support RNNs. We circumvent this limitation by using linearly-recurrent RNNs, which can be unrolled into feedforward networks. We apply the resulting method, dubbed SplineCam-Linear-RNN, to linearly-recurrent RNNs trained on biosignal data and sequential MNIST. Our procedure enables: (1) unprecedented visualization of the decision boundary and complexity of an RNN, and (2) visualization of the frequency sensitivity of RNNs around individual data points.
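To illustrate the unrolling idea the abstract relies on, the sketch below shows how a linear recurrence collapses into a single linear map over the flattened input sequence, i.e., a feedforward form that tools designed for feedforward networks (such as SplineCam) can then analyze. The names (A, B, C, T, d_h, d_x) and the NumPy setup are illustrative assumptions, not the paper's actual architecture or notation.

```python
import numpy as np

# Hypothetical linear RNN: h_t = A h_{t-1} + B x_t, output y = C h_T.
# Dimensions and parameter names are illustrative only.
rng = np.random.default_rng(0)
d_h, d_x, T = 4, 3, 5
A = rng.standard_normal((d_h, d_h)) * 0.5
B = rng.standard_normal((d_h, d_x))
C = rng.standard_normal((1, d_h))

def rnn_recurrent(xs):
    """Run the linear recurrence step by step over xs of shape (T, d_x)."""
    h = np.zeros(d_h)
    for x in xs:
        h = A @ h + B @ x
    return C @ h

def rnn_unrolled(xs):
    """Equivalent feedforward (unrolled) form: one linear map on the flattened sequence.
    Since h_T = sum_t A^(T-1-t) B x_t, the map is W = [A^(T-1) B, ..., A B, B]."""
    W = np.hstack([np.linalg.matrix_power(A, T - 1 - t) @ B for t in range(T)])
    return C @ (W @ xs.reshape(-1))

xs = rng.standard_normal((T, d_x))
assert np.allclose(rnn_recurrent(xs), rnn_unrolled(xs))
```

Because the recurrence is linear, the unrolled network has no per-timestep nonlinearity to track, which is what makes the feedforward rewriting exact rather than approximate.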
Cite
Text
Casco-Rodriguez et al. "Visualizing Linear RNNs Through Unrolling." NeurIPS 2024 Workshops: LXAI, 2024.
Markdown
[Casco-Rodriguez et al. "Visualizing Linear RNNs Through Unrolling." NeurIPS 2024 Workshops: LXAI, 2024.](https://mlanthology.org/neuripsw/2024/cascorodriguez2024neuripsw-visualizing/)
BibTeX
@inproceedings{cascorodriguez2024neuripsw-visualizing,
title = {{Visualizing Linear RNNs Through Unrolling}},
author = {Casco-Rodriguez, Josue and Burley, Tyler and Barberan, Cj and Humayun, Ahmed Imtiaz and Balestriero, Randall and Baraniuk, Richard},
booktitle = {NeurIPS 2024 Workshops: LXAI},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/cascorodriguez2024neuripsw-visualizing/}
}