Visualizing Neural Networks with the Grand Tour
Abstract
Distill articles are interactive publications and do not include traditional abstracts; this summary was written for the ML Anthology. The article demonstrates the Grand Tour, a linear dimensionality-reduction technique for visualizing neural-network behavior that preserves interpretable structure better than non-linear alternatives such as t-SNE. The authors apply it to observe training dynamics, trace data flow through layers, and analyze adversarial examples.
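The Grand Tour's core idea is a smoothly rotating linear projection of the high-dimensional data down to 2D. The sketch below illustrates that idea only; it is not the authors' implementation, and the function name and parameters are illustrative. It builds a random orthonormal basis, projects the data onto its first two vectors, and advances the basis by small rotations in randomly chosen coordinate planes to produce successive animation frames.

```python
import numpy as np

def grand_tour_frames(data, n_frames=10, step=0.1, seed=0):
    """Yield 2D linear projections of `data` (shape: n_samples x d).

    Each frame projects through a slowly rotating orthonormal basis,
    so consecutive frames differ by a small rotation of the full
    d-dimensional space -- the core idea behind the Grand Tour.
    (Illustrative sketch; not the Distill article's implementation.)
    """
    rng = np.random.default_rng(seed)
    d = data.shape[1]
    # Random orthonormal basis of R^d via QR of a Gaussian matrix.
    basis, _ = np.linalg.qr(rng.normal(size=(d, d)))
    frames = []
    for _ in range(n_frames):
        # A linear projection onto the first two basis vectors.
        frames.append(data @ basis[:, :2])
        # Rotate the basis slightly in a random coordinate plane (i, j).
        i, j = rng.choice(d, size=2, replace=False)
        c, s = np.cos(step), np.sin(step)
        rot = np.eye(d)
        rot[i, i], rot[i, j] = c, -s
        rot[j, i], rot[j, j] = s, c
        basis = rot @ basis
    return frames
```

Because every frame is a linear map, straight lines and convex structure in the data remain straight and convex on screen, which is what makes the view easier to interpret than a non-linear embedding.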
Cite
Text
Li et al. "Visualizing Neural Networks with the Grand Tour." Distill, 2020. doi:10.23915/distill.00025
Markdown
[Li et al. "Visualizing Neural Networks with the Grand Tour." Distill, 2020.](https://mlanthology.org/distill/2020/li2020distill-visualizing/) doi:10.23915/distill.00025
BibTeX
@article{li2020distill-visualizing,
  title = {{Visualizing Neural Networks with the Grand Tour}},
  author = {Li, Mingwei and Zhao, Zhenge and Scheidegger, Carlos},
  journal = {Distill},
  year = {2020},
  doi = {10.23915/distill.00025},
  url = {https://mlanthology.org/distill/2020/li2020distill-visualizing/}
}