Visualizing Neural Network Imagination
Abstract
In certain situations, neural networks represent environment states in their hidden activations. Our goal is to visualize which environment states the networks are representing. We experiment with a recurrent neural network (RNN) architecture that ends in a decoder network. After training, we apply the decoder to the network's intermediate representations to visualize what they encode. We define a quantitative interpretability metric and use it to demonstrate that hidden states can be highly interpretable on a simple task. We also develop autoencoder and adversarial techniques and show that they improve interpretability.
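To make the core idea concrete, here is a minimal sketch (not the authors' code) of applying a trained decoder to an RNN's intermediate hidden states. It assumes a PyTorch GRU with an MLP decoder head; the module names, dimensions, and the `decode_hidden_states` helper are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RNNWithDecoder(nn.Module):
    """RNN whose final hidden state is decoded into an environment observation."""
    def __init__(self, input_dim=16, hidden_dim=64, obs_dim=16):
        super().__init__()
        self.rnn = nn.GRU(input_dim, hidden_dim, batch_first=True)
        # Decoder maps a hidden state back to an environment observation.
        self.decoder = nn.Sequential(
            nn.Linear(hidden_dim, 128), nn.ReLU(), nn.Linear(128, obs_dim)
        )

    def forward(self, x):
        hiddens, _ = self.rnn(x)             # (batch, time, hidden_dim)
        return self.decoder(hiddens[:, -1])  # trained on the final timestep only

def decode_hidden_states(model, x):
    """After training, apply the same decoder to every intermediate hidden state
    to visualize which environment states the network is representing."""
    with torch.no_grad():
        hiddens, _ = model.rnn(x)        # hidden states at all timesteps
        return model.decoder(hiddens)    # (batch, time, obs_dim)
```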
Cite
Text
Wichers et al. "Visualizing Neural Network Imagination." ICML 2024 Workshops: MI, 2024.
Markdown
[Wichers et al. "Visualizing Neural Network Imagination." ICML 2024 Workshops: MI, 2024.](https://mlanthology.org/icmlw/2024/wichers2024icmlw-visualizing/)
BibTeX
@inproceedings{wichers2024icmlw-visualizing,
title = {{Visualizing Neural Network Imagination}},
author = {Wichers, Nevan and Tao, Victor and Volpato, Riccardo and Barez, Fazl},
booktitle = {ICML 2024 Workshops: MI},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/wichers2024icmlw-visualizing/}
}